Research

Broadly, my research sits at the intersection of Human-Computer Interaction (HCI) and Machine Learning (ML). In particular, I investigate how ML can be used to enhance existing interaction techniques and create novel ones on devices such as wearables, smartphones, and tablets. In the past, I investigated an interaction technique that senses and models users’ grip patterns on the back and sides of a device to improve touch-target prediction on the touchscreen, identify users, and detect cognitive errors.
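As a rough illustration (not the published method), the grip-sensing work can be framed as supervised learning: a capacitive grid on the back of the device produces a grip map, and a classifier maps that map to a coarse touch-target region. The sketch below uses scikit-learn with entirely synthetic data; the grid size, region count, and model choice are placeholder assumptions.

```python
# A minimal sketch (not the published method): back-of-device grip
# sensing framed as supervised learning over flattened grip maps.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 500 samples of an 8x16 back-of-device capacitance
# grid (flattened to 128 features), each labelled with one of 9 coarse
# touch-target regions on the front screen.
X = rng.random((500, 8 * 16))
y = rng.integers(0, 9, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Region prediction accuracy: {clf.score(X_test, y_test):.2f}")
```

The same grip-map features could in principle feed the user-identification and error-detection tasks by swapping in different labels.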

Lately, my interests have shifted towards computer vision, particularly image processing. I am currently working with my Ph.D. student to investigate deep learning techniques for estimating tree species from high-resolution canopy UAV imagery. I also work with my students on detecting cancerous cells in computed tomography (CT) scan images. Beyond ML and deep learning, I work in a number of classic HCI areas, particularly touch gestures and usability testing.
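As a minimal sketch of the tree-species work, one common framing is tile classification: crop the UAV orthomosaic into fixed-size canopy tiles and fine-tune a pretrained CNN. The example below uses PyTorch and torchvision; the species count, tile size, and random batch are placeholder assumptions rather than the project's actual data or architecture.

```python
# A minimal sketch, assuming tree-species estimation is framed as
# canopy-tile classification with a fine-tuned pretrained CNN.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 5  # placeholder species count

# Start from an ImageNet-pretrained ResNet-18 and replace the classifier
# head so it outputs one logit per species.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for a
# real DataLoader over 224x224 canopy tiles.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SPECIES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"Training loss on one batch: {loss.item():.3f}")
```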

Research Projects (current)

(Ph.D) Deep Learning Techniques to Detect Cancerous Lung Cells from CT-Scan Images

To be updated.

(M.Sc) Predicting Students’ Attentiveness Using Facial Expressions and Electroencephalogram (EEG) Signals

To be updated.

(Ph.D) Estimating Tree Species from Forest Canopy Unmanned Aerial Vehicle (UAV) Imagery

To be updated.

(Ph.D) Investigating Touch Gestures to Support Collaboration for Cross-Device Interactions

To be updated.

(Ph.D) Evaluation and Comparison of Online Usability Studies in Different Environments

To be updated.

Research Projects (past)

Predicting Touch Target from Back-of-Device Interaction

To be updated.

User Identification from Back-of-Device Grip Changes

To be updated.

Detecting Cognitive Errors Using Back-of-Device Grip Modulation

To be updated.