
MIT researchers convert smartphone into eye-tracking device

Washington (ISJ) - A team of MIT researchers, headed by an Indian researcher, has developed software that turns an ordinary mobile phone into an eye-tracking device. Eye-tracking has been widely used for almost four decades in psychological experiments and marketing research, but it has required pricey hardware.

The software, developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory and the University of Georgia, makes existing applications of eye-tracking technology more accessible. In addition, the system could enable new computer interfaces or help detect signs of incipient neurological disease or mental illness. The researchers will present the new system in a paper at the Computer Vision and Pattern Recognition Conference on June 28.

"The field is kind of stuck in this chicken-and-egg loop," says Aditya Khosla, an MIT graduate student in electrical engineering and computer science and co-first author on the paper. "Since few people have the external devices, there's no big incentive to develop applications for them. Since there are no applications, there's no incentive for people to buy the devices. We thought we should break this circle and try to make an eye tracker that works on a single mobile device, using just your front-facing camera."

Khosla and his colleagues built their eye tracker using machine learning, a technique in which computers learn to perform tasks by looking for patterns in large sets of training examples. Their advantage over previous research was the amount of data they had to work with. Currently, Khosla says, their training set includes examples of gaze patterns from 1,500 mobile-device users. Previously, the largest data sets used to train experimental eye-tracking systems had topped out at about 50 users.
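To make the general approach concrete, the sketch below is a hypothetical illustration, not the researchers' actual model: it assumes a small convolutional network (here called GazeNet, an invented name) that regresses from a captured face image to an on-screen gaze point, trained by minimizing mean squared error over (image, gaze point) pairs like those in the team's data set.

```python
# Hypothetical sketch of learning a gaze regressor from face images.
# Architecture, sizes, and names are illustrative assumptions; the
# article does not describe the researchers' actual model.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),  # predicted on-screen gaze point (x, y)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = GazeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for (face image, gaze point) training pairs;
# the real system would iterate over the crowdsourced data set.
faces = torch.randn(16, 3, 64, 64)
gaze = torch.randn(16, 2)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(faces), gaze)
    loss.backward()
    optimizer.step()
```

With 1,500 users contributing images, the same training loop would simply run over a far larger set of real pairs, which is exactly where the group's data advantage comes in.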

"Most other groups tend to call people into the lab" to assemble data sets, Khosla said. "It's really hard to scale that up. Calling 50 people in itself is already a fairly tedious process. But we realized we could do this through crowdsourcing."

In the paper, the researchers report an initial round of experiments, using training data drawn from 800 mobile-device users. On that basis, they were able to get the system's margin of error down to 1.5 centimeters, a twofold improvement over previous experimental systems.

Since the paper was submitted, however, they?ve acquired data on another 700 people, and the additional training data has reduced the margin of error to about a centimeter.

To get a sense of how larger training sets might improve performance, the researchers trained and retrained their system using different-sized subsets of their data. Those experiments suggest that about 10,000 training examples should be enough to lower the margin of error to a half-centimeter, which Khosla estimates will be good enough to make the system commercially viable.
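That subset experiment amounts to plotting a learning curve. The sketch below illustrates the idea under assumed, synthetic data and a simple stand-in regressor: the same model is trained on progressively larger slices of the data and scored on a held-out set, producing the kind of error-versus-training-size curve that can be extrapolated toward the 10,000-example mark.

```python
# Illustrative learning-curve experiment with synthetic data; the
# real study would substitute the actual gaze model and data set.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 100))                         # stand-in features, one row per sample
true_w = rng.normal(size=(100, 2))
y = X @ true_w + rng.normal(scale=5.0, size=(1500, 2))   # gaze targets (x, y)

X_test, y_test = X[-300:], y[-300:]                      # held-out evaluation slice

for n in [50, 100, 200, 400, 800, 1200]:
    model = Ridge().fit(X[:n], y[:n])
    # Mean Euclidean distance between predicted and true gaze points.
    err = np.linalg.norm(model.predict(X_test) - y_test, axis=1).mean()
    print(f"train size {n:5d}: mean error {err:.2f}")
```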

To collect their training examples, the researchers developed a simple application for devices that use Apple's iOS operating system. The application flashes a small dot somewhere on the device's screen, attracting the user's attention, then briefly replaces it with either an "R" or an "L," instructing the user to tap either the right or left side of the screen. Correctly executing the tap ensures that the user has actually shifted his or her gaze to the intended location. During this process, the device camera continuously captures images of the user's face.
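The validation step described above reduces to a few lines of logic. This is a hypothetical sketch (function names and screen dimensions are illustrative, not taken from the app): a trial's captured frames are kept only when the confirming tap lands on the side of the screen named by the prompt.

```python
# Hypothetical sketch of the collection app's trial validation;
# the dot attracts gaze, the 'R'/'L' prompt and matching tap confirm it.
import random

SCREEN_WIDTH = 375  # illustrative screen width in points

def make_trial():
    """Pick a random dot position and a random 'R' or 'L' prompt."""
    dot = (random.uniform(0, SCREEN_WIDTH), random.uniform(0, 600))
    prompt = random.choice(["R", "L"])
    return dot, prompt

def tap_is_valid(prompt, tap_x):
    """A tap on the right half confirms 'R'; the left half confirms 'L'."""
    side = "R" if tap_x >= SCREEN_WIDTH / 2 else "L"
    return side == prompt

dot, prompt = make_trial()
# Face images captured while the dot was on screen are kept only
# when the tap matches the prompt; otherwise the trial is discarded.
print(tap_is_valid(prompt, tap_x=300))
```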

The researchers recruited application users through Amazon's Mechanical Turk crowdsourcing site and paid them a small fee for each successfully executed tap. The data set contains, on average, 1,600 images for each user.

Source: MIT

Illustration courtesy: MIT
