Sayan Sarcar

Sayan.Sarcar@bcu.ac.uk
Millennium Point, Birmingham, B4 7XG, UK

Research Highlights

My research fits within the general goal of improving individual human abilities through intelligent systems, developed using design and modelling practices, with a specific focus on individual differences in users’ sensorimotor abilities and their effect on technology adoption. Research topics include Computational User Interface Design, Aging & Accessibility, Input and Interaction, and Human-Computer Interaction (HCI).

My past and present research domains are as follows. If you are interested in any of these research areas, please do not hesitate to drop me an email sharing your ideas.

Multimodal Text Entry

I developed mechanisms to design text entry interfaces suitable for mouse-, finger- and eye-gaze-based interaction on large and small display devices, for English and Indic languages. These mechanisms include virtual keyboards, a word-level prediction method and an eye-gaze-based text entry interface. Together with my colleagues, I have made multiple innovations in this area:

  • Optimized virtual keyboard layouts for Indic languages based on mouse pointer movement distances, and conducted experiments with three languages, namely Hindi, Bengali and Telugu (some Bengali online keyboards are available here) (IJHCI 2013)
  • Designed a novel touchscreen keyboard layout that accommodates a large alphabet set while overcoming the 'fat finger' problem; the layout also provides character- and word-level prediction with error detection and correction support (ongoing)
  • Developed a new set of metrics to evaluate Indic language text entry systems (CHI Workshop 2015)
  • Developed inexpensive eye-tracking hardware and a novel dwell-free eye-typing interaction for virtual keyboard based English and Hindi text entry (APCHI 2013, India HCI 2013, CHI 2014)

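The movement-distance-based layout optimization above can be illustrated with a toy sketch: assign characters to key positions so that the frequency-weighted pointer travel between consecutive characters is minimized. Everything here (the alphabet, the bigram frequencies, the 2x2 grid, and the exhaustive search) is a simplified stand-in for illustration, not the published method:

```python
import itertools
import math

# Toy alphabet and bigram frequencies (hypothetical values, for illustration only).
BIGRAM_FREQ = {("a", "b"): 0.30, ("b", "c"): 0.25, ("a", "c"): 0.20,
               ("c", "d"): 0.15, ("b", "d"): 0.10}
CHARS = ["a", "b", "c", "d"]
SLOTS = [(0, 0), (1, 0), (0, 1), (1, 1)]  # key positions on a 2x2 grid

def expected_travel(layout):
    """Frequency-weighted pointer travel distance for a char -> slot mapping."""
    cost = 0.0
    for (c1, c2), freq in BIGRAM_FREQ.items():
        (x1, y1), (x2, y2) = layout[c1], layout[c2]
        cost += freq * math.hypot(x2 - x1, y2 - y1)
    return cost

def best_layout():
    """Exhaustively try every assignment of characters to slots.

    Feasible only for tiny alphabets; a real Indic keyboard would need a
    metaheuristic (e.g., simulated annealing) over the same cost function.
    """
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(SLOTS):
        layout = dict(zip(CHARS, perm))
        cost = expected_travel(layout)
        if cost < best_cost:
            best, best_cost = layout, cost
    return best, best_cost
```

For a real Indic alphabet the search space is far too large for exhaustive enumeration, so metaheuristics operating on the same frequency-weighted cost are the usual choice.
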
Computational User Interface Design for Older Adults

I was involved in a project designing novel smartphone user interfaces suited to elderly people’s movement capabilities and cognitive functioning. The specific goal was to create scientific foundations for designing smartphone user interfaces that better suit the individual abilities of older adults. I was part of a team that proposed a new concept called "Ability-based Optimization", which leverages mathematical models of user performance to predict improvements in user interface throughput and functionality. The outcomes are interface designs that improve usability and usefulness, and help older adults avoid expensive and socially stigmatizing specialized devices.

  • Developed a computational design approach for improving user interface designs for people with sensorimotor and cognitive impairments, including older adults (IEEE PC 2018, Interact 2017, ACM ITAP 2016)
  • Developed a model to understand visual search and learning in text entry (CHI 2017)
  • Developing a model on how people learn to search graphical layouts and how search adapts when the layout changes (ongoing)
  • Developing a model which adapts based on human behaviour in the context of a touchscreen typing task (ongoing)

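As a rough illustration of how model-based, ability-aware design selection can work, the sketch below picks between two hypothetical layouts using Fitts' law with user-specific motor parameters. The layouts, parameter values and decision rule are invented for illustration and are not the project's actual models:

```python
import math

def fitts_time(distance, width, a, b):
    """Fitts' law (Shannon form): predicted time for one pointing action,
    with user-specific motor parameters a (intercept) and b (slope)."""
    return a + b * math.log2(distance / width + 1)

# Two hypothetical phone layouts, each a list of (travel distance, target width)
# per required tap. The enlarged design has bigger, easier targets but needs
# one extra tap (a page change) to reach every function.
DESIGNS = {
    "compact":  [(30.0, 8.0), (45.0, 8.0)],
    "enlarged": [(30.0, 16.0), (20.0, 16.0), (45.0, 16.0)],
}

def predicted_cost(design, a, b):
    """Total predicted completion time for one user on one design."""
    return sum(fitts_time(d, w, a, b) for d, w in DESIGNS[design])

def ability_based_choice(a, b):
    """Pick the design with the lowest predicted time for this user."""
    return min(DESIGNS, key=lambda name: predicted_cost(name, a, b))
```

For a user with a steeper Fitts slope b (slower, less precise pointing), the cost of the enlarged design's extra tap is outweighed by its easier targets, so the predicted-best design flips; this is the basic mechanism by which an ability-based optimizer can produce different interfaces for different users.
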
Improving the Learnability of Mobile Device Interfaces for Older Adults

I sought to better understand how older adults learn to use mobile devices, their preferences and barriers, in order to find new ways to support them in their preferred learning process.

  • Designed a mobile interface, namely HelpKiosk (HK), to support older adults’ learning to use mobile devices (ASSETS 2018)
  • Conducting research on how well the various features in HK (e.g., live view, demo video, instructions on a separate display) support learning (ongoing)
  • Exploring applying HK’s features to different devices (e.g., medical device) or apps (ongoing)
  • Conducting research on how different scaffolding strategies affect users’, specifically older users’, learning (ongoing)

Exploring Performance of Thumb Input for Pointing and Dragging

I conducted an exploratory user experiment on one-thumb pointing and dragging task performance, based on three factors: mobile device size, target size and posture.

  • Analyzed users’ performance on basic HCI touchscreen tasks depending on three factors, namely target size, mobile device size and posture (to be submitted)
  • Creating meaningful visualizations of the collected data to clearly understand the effects of factors such as reachability, grip, and movement of the phone in the hand (ongoing)
  • Upgrading existing models for predicting the next touch location and the functional area of the thumb on a touchscreen (ongoing)
  • Extending the project to two-handed touchscreen interactions

Designing Mid-air Two-handed Gestures

I investigated the design of mid-air gestures, particularly those performed with two hands, to improve their accessibility, memorability and ease of use.

  • Investigated mid-air TV gestures for blind people (DIS 2016)
  • Exploring the concept of combination of gestures performed by two hands to improve memorability and ease of gesturing (ongoing)

Developing a Framework to Understand Cultural Effects on Older Adults’ Adoption of New Technology

This project has two sub-goals: (1) upgrade existing theoretical models of technology adoption (e.g., TAM) so that they can capture the effect of cultural differences on the adoption of newer technologies (e.g., voice-based interfaces such as Amazon Alexa), and extract meaningful new interface and interaction design directions through workshops with older adults in Japan and elsewhere; and (2) develop several technical improvements to a voice-based assistant as a case study, to validate the design implications and investigate the usability of such systems for older adults.