Research

My research sits at the intersection of Human-Computer Interaction (HCI) and Machine Learning (ML). In particular, I investigate how ML can be used to enhance existing interaction techniques and create novel ones on devices such as wearables, smartphones, and tablets. In the past, I investigated an interaction technique that senses and models users’ grip patterns on the back and sides of a device to improve touchscreen target prediction, identify users, and detect cognitive errors.
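
To give a rough flavour of the modeling side of this line of work, the sketch below treats a back-of-device grip sample as a flat vector of sensor readings and trains a standard classifier to identify the user. The sensor layout, feature vector size, synthetic data, and choice of a random forest are illustrative assumptions, not the actual pipeline used in these projects.

```python
# Minimal sketch: user identification from back-of-device grip samples.
# Assumes each grip sample is a flat vector of sensor readings; the data
# below is a synthetic stand-in, not real grip data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 grip samples x 64 sensor readings, 4 users.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a per-user grip model and report held-out identification accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```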

I am currently working with my PhD student to investigate computer vision (CV) techniques for estimating tree species from high-resolution UAV canopy imagery. Besides ML, I also work in a number of classic HCI areas, particularly touch gestures and usability testing.
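
As an illustration of one common approach to this kind of problem, the sketch below fine-tunes a pretrained CNN on labeled canopy image patches to predict tree species. The dataset layout, number of species, and choice of ResNet-18 are assumptions for illustration only, not a description of the project's actual method.

```python
# Minimal sketch: fine-tuning a pretrained CNN to classify tree species
# from UAV canopy image patches (PyTorch / torchvision).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 5  # hypothetical number of species in the study area

# Standard ImageNet-style preprocessing applied to each canopy patch.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Placeholder path: assumes patches are stored as
# canopy_patches/train/<species_name>/<image>.png
train_set = datasets.ImageFolder("canopy_patches/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet classifier head with a species classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the training patches; a real experiment would run
# multiple epochs with a held-out validation split.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```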

Research Projects (current)

Estimating Tree Species From Forest Canopy Unmanned Aerial Vehicle (UAV) Imagery

To be updated.

Investigating Touch Gestures to Support Collaboration for Cross-Device Interactions

To be updated.

Evaluation and Comparison of Online Usability Studies in Different Environmental Settings

To be updated.

Research Projects (past)

Predicting Touch Target from Back-of-Device Interaction

To be updated.

User Identification from Back-of-Device Grip Changes

To be updated.

Detecting Cognitive Errors Using Back-of-Device Grip Modulation

To be updated.