
A University of Guelph engineering professor, along with one of his graduate students, recently wrapped up a collaboration with Google’s Advanced Technology and Projects (ATAP) group aimed at improving security on mobile devices.

Professor Graham Taylor, an expert on machine learning, and his team examined how the sensors in a mobile device could use movement, location, and other signals to identify the user, as a new form of authentication.

With the project now completed, the door has been opened for future collaborations with Google.
The Charlatan spoke with Professor Taylor to discuss his team’s work in creating what could be a “password-free” world.

The Charlatan (TC): Your team collaborated with Google’s Advanced Technology and Projects team and looked into password authentication. How did this collaboration begin?

Graham Taylor (GT): The collaboration started when Google sent me an email and asked if we would be interested in talking about a potential collaboration. They were aware of some of the work I had previously done in an area called “multi-modal machine learning.”

TC: So what does that mean?

GT: It’s really all about getting computers to learn like humans. There are certain things computers are very good at, like summing a whole bunch of numbers together or computing pi. We can write algorithms to do those sorts of things, and programmers are very good at writing fairly sophisticated algorithms, but when it comes to certain tasks like recognizing faces or a tune you hear on the radio, it is much more difficult to write a program to do that. That’s where machine learning comes in. Instead of actually writing a program to do some sort of task, you get the computer to basically make the program automatically. It’s taking some data and automatically creating these programs to do these amazing things.
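To make Taylor’s point concrete, here is a minimal sketch (ours, not from the study) of a program being “made automatically” from data: instead of hand-writing a rule that separates two classes, a few lines of Python learn one from labelled examples.

```python
# Minimal sketch (not from the interview): rather than hand-writing rules,
# we fit a model's parameters to examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": 2-D points from two classes (think: two different users).
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Learn a linear decision rule with the classic perceptron update.
w, b = np.zeros(2), 0.0
for _ in range(20):                       # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi             # nudge weights on mistakes
        b += (yi - pred)

# The "program" (w, b) was created automatically from data, not written by hand.
print(((X @ w + b > 0).astype(int) == y).mean(), "training accuracy")
```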

TC: What have you done with it so far?

GT: We had done a project previously out of my lab where we were using this so-called multi-modal data to do gesture recognition . . . Google was aware that we were doing some work in that area and they knew that this cellphone problem required multi-modal learning because a cellphone has so many sensors. It has an image sensor, so it can take a picture of your face. It has a microphone, so it can hear what you are saying. It has a GPS sensor, it has the accelerometer and gyroscope that can detect motion of the device, and the list goes on. You can bring all these sensors together to make a decision on whether to unlock a phone or not.
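As an illustration of bringing those sensors together, here is a hedged sketch in Python; every name, score, and weight in it is hypothetical, chosen only to show the shape of a multi-sensor unlock decision, not Google’s or Taylor’s actual design.

```python
# Hypothetical illustration of fusing several phone sensors into one
# unlock decision. All names and weights are invented for this sketch.
from dataclasses import dataclass

@dataclass
class SensorScores:
    face: float      # image sensor: does this look like the owner? (0..1)
    voice: float     # microphone: does this sound like the owner?
    location: float  # GPS: is the phone somewhere the owner usually is?
    motion: float    # accelerometer + gyroscope: gait/handling signature

def unlock_decision(s: SensorScores, threshold: float = 0.8) -> bool:
    # Simple weighted fusion; a real system would learn these weights
    # from data rather than fixing them by hand.
    combined = (0.4 * s.face + 0.2 * s.voice +
                0.1 * s.location + 0.3 * s.motion)
    return combined >= threshold

print(unlock_decision(SensorScores(face=0.9, voice=0.7, location=1.0, motion=0.8)))
```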

TC: Is there potential the work that your team is collaborating on with Google right now could be found in their products in the future?

GT: There is certainly potential. We’ve essentially created this partnership between the university and Google, so that if other researchers or our group wanted to collaborate with them, it would be much easier now that the initial collaboration has been done. But the idea is that hopefully this would make it into a future Android operating system, and Google has a team now that is working on taking the results of this initial study and evaluating the feasibility of that.

TC: So what did your team specifically work on?

GT: Each of the teams focused on a different one of these sensors. While some groups were looking at things like typing patterns, other groups were looking at images. Our group was looking at the accelerometer and the gyroscope, so at how the phone is moving: whether your body movement gives a signal as to who you are. In other words, can you identify a person based on the motion of them holding the phone, or the phone in their pocket while they walk around, or any kind of motion on the device? The way that your phone moves is a very weak biometric, meaning that it’s not a great way to authenticate someone on its own. However, when you combine that signal with the keystroke patterns, with the images, with the GPS and all the other things the teams were working on, it forms part of a bigger picture, and that makes for a really secure authentication mechanism.
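Taylor’s point that several weak signals add up to a strong one can be seen in a toy calculation (the numbers below are invented): if each modality independently makes the “this is the owner” hypothesis only a few times more likely, their product is decisive.

```python
# Toy numbers (ours, not from the study). Each modality contributes an
# independent likelihood ratio P(evidence | owner) / P(evidence | impostor).
motion_lr = 2.0   # motion alone: only 2x more likely under the owner (weak)
keys_lr   = 3.0   # keystroke patterns
gps_lr    = 4.0   # location history
face_lr   = 10.0  # image sensor

prior_odds = 1.0  # 50/50 before seeing any evidence
posterior_odds = prior_odds * motion_lr * keys_lr * gps_lr * face_lr

# Convert odds to the probability that the holder is the owner.
p_owner = posterior_odds / (1 + posterior_odds)
print(f"P(owner) from all signals combined: {p_owner:.3f}")  # ~0.996
```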

TC: One of the technologies we’ve already seen in consumer devices is the fingerprint scanner. Do you see that, and perhaps retinal scanning, as useful ways to authenticate a device?

GT: I have a fingerprint scanner on my phone now; that’s my preferred method of unlocking my phone. You could certainly take the fingerprint scanner and combine that with our system as another modality. However, the fingerprint scanner requires you to take a bit of time to press the device and align the fingerprint, and sometimes you don’t place it correctly. That’s kind of the aim of the program, to get away from that. Even though the fingerprint scanner is pretty good from a security perspective, it’s in the same family as the swipe codes and the PINs, in that you still need to devote a bit of time to getting it to work. Essentially, what we are trying to do is make a system that we call continuous authentication. The phone is always trying to figure out who is around it, and it’s passive: it doesn’t require you to actually do anything to the device.
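A rough sketch of that continuous, passive loop (the thresholds, decay rate, and signals here are all hypothetical, not Google’s or Taylor’s actual design): the phone keeps a running confidence that the owner is present, which decays unless passive signals keep confirming it, and locks when it drops too low.

```python
# Hypothetical sketch of continuous authentication: no explicit action
# from the user, just a background confidence score.
import random

random.seed(0)

confidence = 1.0   # start fully trusted right after an explicit unlock
DECAY = 0.9        # how quickly confidence fades between confirmations
LOCK_BELOW = 0.5   # below this, fall back to PIN/fingerprint

def passive_signal(t: int) -> float:
    # Stand-in for a fused multi-modal score (motion, typing, GPS, ...).
    # After t = 15 we pretend a stranger picks up the phone, so the
    # signals stop matching the owner's patterns.
    return random.uniform(0.7, 1.0) if t < 15 else random.uniform(0.0, 0.3)

for t in range(40):  # imagine one tick per second, running in the background
    confidence = DECAY * confidence + (1 - DECAY) * passive_signal(t)
    if confidence < LOCK_BELOW:
        print(f"t={t}: confidence {confidence:.2f}, locking the phone")
        break
```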