
Human-Centred Visual Learning and Its Applications

The Computer Vision @ Queen's community invites staff to its next seminar, 'Human-Centred Visual Learning and Its Applications'

Computer Science Building
Date(s)
February 8, 2024
Location
Computer Science Building, Allstate Software Studio (CSB02.043)
Time
12:00 - 13:00

The talk will be followed by lunch and networking.

The guest speaker is Dr Hyung Jin Chang (University of Birmingham).

For catering purposes, please indicate your attendance here.

Abstract

The progress of artificial intelligence relies on humans, both as teachers and beneficiaries. In my research on human-centred visual learning, the primary goal is to create vision-based algorithms that prioritise the usability and usefulness of AI systems by addressing human needs and requirements. A crucial aspect of this work involves comprehending human body pose, hand pose, eye gaze, and object interaction, as they provide valuable insights into human actions and behaviours. During this talk, I will discuss recent studies conducted by my research group, covering topics such as hand-object pose and shape estimation, 3D facial image rendering for gaze tracking, and body pose estimation. Additionally, I will introduce intriguing applications that leverage human-centred vision methodologies.

Speaker Bio

Dr Hyung Jin Chang is an Associate Professor in the School of Computer Science at the University of Birmingham and a Turing Fellow of the Alan Turing Institute. Before joining the University of Birmingham, he was a post-doctoral researcher at Imperial College London, and he received his PhD degree from Seoul National University. His research combines multiple areas of artificial intelligence, including computer vision, machine learning, robotics, and intelligent human-computer interaction. His research career began with a focus on the theoretical underpinnings of machine learning and has since converged on applying these foundations to practical problems in visual surveillance, HCI, and robotics, with an emphasis on estimating human eye gaze, hand pose, body pose, kinematic structure, and 6D object pose. Recently, his research has focused on exploiting and advancing computer vision and deep learning techniques to move toward intelligent human-robot and human-computer interaction based on visual data. He also extends his expertise to interdisciplinary research, applying cutting-edge deep learning technologies to fields such as brain imaging analysis for diagnostic purposes and the enhancement of rehabilitation exercises.

This event is supported by the Queen's Agility Fund.

Audience
Staff
Subject/Theme
Science / Technology