Use Cases

Use Case 1: Hands-free operation of autofocal glasses via LFI sensor-based eye-tracking

Motivation and Objectives of Use Case 1

  1. Improve the handling, safety, and comfort of autofocal glasses during daily activities, e.g. driving a car or simply walking downstairs.
  2. The eye-tracking system shall monitor eye movements, detect reading behavior based on known movement sequences, and trigger the corresponding system controls when these sequences are detected.
  3. LFI sensor-based eye-tracking systems are the only technology compact enough to fit into all-day wearable and aesthetically appealing smartglasses.
  4. The use case implementation will showcase the hands-free operation of autofocal glasses by eye movements.
  5. People wearing the glasses will be able to read small print intuitively and navigate their surroundings with appropriate visual acuity.

Approach and Demonstrator:

  • The demonstrator will be based on a common glasses frame with at least two integrated LFI sensor-based eye-tracking systems covering both eyes. To cater to anatomical variation in the human population, three models with different dimensions are foreseen.
  • The integrated eye-tracking system will monitor eye movements and continuously compare the data with known sequences of eye movements associated with reading. A decision tree for activating and deactivating the tunable lenses will be implemented, tested, and optimized (see the sketch after this list).
  • The glasses will be tested by various users to validate the more intuitive operation in comparison to results obtained with the current approaches (VOG/EOG).
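To illustrate the intended decision logic, the following minimal Python sketch classifies a short gaze window as reading and toggles the tunable lenses accordingly. All feature names, thresholds, and the set_lens_power() hook are hypothetical placeholders chosen for illustration, not the project's actual decision tree.

```python
# Minimal sketch of a reading-detection decision rule for autofocal glasses.
# Feature names, thresholds, and the set_lens_power() hook are hypothetical.
from dataclasses import dataclass

@dataclass
class GazeWindow:
    mean_pitch_deg: float        # average vertical gaze angle (negative = downward)
    small_saccades_per_s: float  # rate of small left-to-right reading saccades
    return_sweeps_per_s: float   # rate of large right-to-left "new line" sweeps

def is_reading(w: GazeWindow) -> bool:
    """Classify a short gaze window as reading via a hand-tuned decision tree."""
    if w.mean_pitch_deg > -10.0:        # gaze not lowered enough for reading
        return False
    if w.small_saccades_per_s < 2.0:    # reading produces frequent small saccades
        return False
    return w.return_sweeps_per_s > 0.2  # occasional return sweeps confirm reading

def update_lens(w: GazeWindow, set_lens_power) -> None:
    # Engage near focus only on a detected reading pattern, so that a plain
    # downward glance (e.g. walking downstairs) does not switch the lens.
    set_lens_power("near" if is_reading(w) else "far")
```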

Aimed Results:

  • Autofocal glasses demonstrator with integrated LFI sensor-based eye-tracking system.
  • Hands-free operation of the glasses, with the ability to read fine print when desired.
  • Reliable autofocal switching (>95% of cases) based on the viewing angle determined by eye-tracking.
  • Robust against downward gaze in the absence of reading intent.
  • Gaze-angle detection frequency > 100 Hz.

 

Use Case 2: Cognitive Load & Attention Tracking

Motivation and Objectives of Use Case 2

  1. Development of an algorithm for cognitive load detection based on multiple features, e.g. pupillary information, microsaccades, and visual scan-path analysis.
  2. Integration of the implemented algorithm into an adaptive interface.
  3. Evaluation against video-based eye-tracking.

Approach and Demonstrator:

  • Assessment of cognitive load based on several data streams derived from the novel wearable, such as pupillary information and eye-movement features.
  • Utilize fixation-, saccade-, and microsaccade-related features in the model:
      ◦ Number of fixations per second
      ◦ Saccade characteristics
  • The proposed technology's high sampling frequency enables the detection of microsaccades (see the sketch after this list):
      ◦ Microsaccades are small involuntary eye movements during fixation
      ◦ More visually demanding tasks increase microsaccade frequency
  • Develop a method for classifying cognitive load with:
      ◦ Robust estimation
      ◦ High classification accuracy
      ◦ Generalizability across participants
      ◦ Potential for real-time application
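To make the microsaccade argument concrete, the following Python sketch implements the widely used velocity-threshold detection method in the spirit of Engbert & Kliegl (2003). The threshold multiplier and minimum duration below are illustrative defaults, not project specifications.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection in the spirit of
    Engbert & Kliegl (2003). x, y: gaze angles in degrees, sampled at fs Hz."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # 5-point smoothed velocity: v[n] = (x[n+2] + x[n+1] - x[n-1] - x[n-2]) * fs / 6
    vx = np.convolve(x, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    vy = np.convolve(y, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    # Robust, median-based velocity threshold per axis.
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
    # Group consecutive supra-threshold samples into candidate microsaccades.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events  # list of (onset_index, offset_index) sample pairs
```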

Aimed Results:

  • A practically viable model for cognitive load detection that allows assessment of the user's cognitive load across tasks and subjects.
  • Method capable of running in real time based on a sliding-window approach (see the sketch after this list).
  • Model which allows adaptation of interfaces.
  • Accurate and robust cognitive load detection, with over 90% accuracy when three levels of cognitive load are considered, surpassing the state of the art even in the high-frequency eye-tracking setup.
  • Online and generic methods for cognitive load detection that improve user experience beyond the proposed eye-tracking system; these will also be evaluated on other eye-tracking datasets.
  • Harmonious synchronization of cognitive load detection with adaptive user interfaces.
  • Seamless synchronization of pupillary information with microsaccadic eye characteristics.
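As an illustration of the sliding-window approach referenced above, the following Python sketch applies a pre-trained classifier to a live gaze/pupil stream. The stream format, sampling rate, window parameters, and feature set are illustrative assumptions only, not the project's actual model.

```python
from collections import deque
import numpy as np

def sliding_window_load(stream, clf, fs=250, win_s=2.0, hop_s=0.5):
    """Run a pre-trained classifier (e.g. a scikit-learn model predicting
    low/medium/high load) over a live stream of (pupil_diameter, gaze_x,
    gaze_y) samples, using overlapping windows advanced every hop_s seconds."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    buf = deque(maxlen=win)
    for n, sample in enumerate(stream, start=1):
        buf.append(sample)
        if len(buf) == win and n % hop == 0:
            arr = np.asarray(buf)
            feats = [arr[:, 0].mean(),                   # mean pupil diameter
                     arr[:, 0].std(),                    # pupil variability
                     np.abs(np.diff(arr[:, 1])).mean(),  # horizontal gaze activity
                     np.abs(np.diff(arr[:, 2])).mean()]  # vertical gaze activity
            yield clf.predict([feats])[0]                # one load label per window
```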

 

Use Case 3: Privacy-preserving person identification

Motivation and Objectives of Use Case 3

  1. Ensure privacy in eye-tracking systems by avoiding the use of camera-based solutions that capture real images of the eyes, which pose legal and ethical challenges.
  2. Demonstrate that VIVA’s eye-tracking technology, which is not image-based, can mitigate privacy concerns while maintaining high performance in person identification tasks.
  3. Investigate whether accurate person identification can be achieved using non-intrusive data streams instead of conventional scene or eye cameras.

Approach and Demonstrator:

  • The demonstrator will utilize the VIVA eye-tracking system, which operates without traditional cameras, ensuring privacy by not capturing real images of the eyes.
  • Multiple identification tasks will be conducted at different time intervals to evaluate the consistency and reliability of person identification based on eye-movement data (see the sketch after this list).
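The following minimal Python sketch indicates how such identification runs could be scored, matching a probe session's eye-movement feature vector against enrolled per-person templates. The cosine-similarity matching and the choice of features are illustrative assumptions, not VIVA's actual identification pipeline.

```python
import numpy as np

def identify(enrolled, probe):
    """Match a probe feature vector (e.g. fixation/saccade statistics from one
    session) against enrolled templates; `enrolled` maps person-id -> vector.
    Returns the best-matching identity and its cosine-similarity score."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {pid: cos(tmpl, probe) for pid, tmpl in enrolled.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Repeating this matching across sessions recorded at different time intervals
# yields the consistency and reliability figures mentioned above.
```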

Aimed Results:

  • Privacy-compliant eye-tracking demonstrator capable of performing person identification without storing real images of the eyes.
  • Development of privacy-preserving measures in case identification proves achievable under experimental conditions.
  • Validation of the VIVA eye-tracking system’s ability to operate effectively while ensuring user privacy.