SCENE Lab

User-centric and User2User-interactive Data Analytics and Data Science Lab

Lab Lead

Professor A.I. Cristea

Collaborators

Dr. J. Wang

Dr. L. Shi

Dr. C. Stewart

Abstract

The User-centric and User2User-interactive Data Analytics and Data Science (SCENE) Lab promotes the move from traditional computer-driven data analytics to ubiquitous computing data analytics, in the areas of personalisation and adaptation for uni-, bi- or multi-user environments. SCENE enables hands-on research on personalisation and user interaction based on data-rich devices and applications for applied areas such as society, education, business, law and health. SCENE explores a future belonging to virtual- (VR), augmented- (AR), extended- (XR), mixed- (MR) and cross-reality solutions (jointly called ‘ÍCR’, as representing subsets of CR), applying personalisation to ÍCR, as in augmented, personalised reality or virtual adaptive reality, to simulate the expected ubiquitousness of the user-centric view. Moreover, SCENE scrutinises how this user-centricity can be supported in dyads, triads, tetrads and other small-scale user formations. SCENE contains equipment such as HoloLens headsets, Valve Index VR headsets, a 3D printer, a 3D scanner, BCI headsets (NeuroSky, EMOTIV) and ultrahaptic devices (STRATOS).

The SCENE Lab welcomes PhD student applicants, academic visitors, PhD student visitors and any collaborators. Please contact the lab lead for more information.

Background

For many centuries we have been moving away from the old-fashioned concept of encyclopaedic brains towards the notion of extended memory: extending our memory with devices such as books, microfiche, computers, laptops, mobile phones, etc. Furthermore, new generations are raised in a multi-source, multi-channel and multi-tasking world and can not only handle, but expect, an information-rich environment. Despite these developments, and the fact that data ‘is the new oil’ [1], data continues to be generated and needs to be processed in a human-oriented way. Personalisation of data, based on data analytics and visualisation, has been and remains a strong and growing research area. However, this research cannot be confined to our computers anymore: augmented reality is already here, it is ubiquitous, and it is rapidly expanding. This rapid progress affects all layers of society and all application areas.

New devices are constantly being created, such as the HoloLens and transparent projector screens, to name but a few, with each type of device opening up a new world of interactivity.

Thus, the existing expertise in personalisation and adaptation on the one hand, and in Artificial Intelligence and Data Analytics on the other, requires expansion and reorientation towards the area of augmented, personalised reality and, to some extent, virtual adaptive reality, to simulate the expected ubiquitousness of the user-centric view. Moreover, it is useful to understand how this user-centricity continues to be supported in dyads, triads, tetrads and other small-scale formations.

Sample Research Questions

The SCENE Lab researches personalisation and user interaction based on data-rich devices and applications for applied areas such as society, education, business, law and health.

It is important to move from computer-driven data analytics to ubiquitous computing data analytics in the areas of personalisation and adaptation for uni-, bi- or multi-user environments.

The research should address several research questions, examples of which are:

  • How can wide-scale, rapid and pervasive data collection be personalised in real time to drive the delivery of a context-driven digital environment?
  • How is augmented learning content perceived by learners, compared to traditional learning content?
  • How do pairs of people change their discourse when the topic is augmented?
  • How do dyads, triads, tetrads, etc., and leader-less versus leader-based groups, function in augmented reality environments, compared to traditional environments?

[1] https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

Sample Projects & Outcomes

BETTER: An automatic feedback system for supporting emotional speech training

○ Wynn, A. & Wang, J., 2023. BETTER: An Automatic feedBack systEm for supporTing emoTional spEech tRaining. In Artificial Intelligence in Education. Durham Research Online

○ Wynn, A. T., Wang, J., Umezawa, K. & Cristea, A. I., 2022. An AI-Based Feedback Visualisation System for Speech Training. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium (pp. 510-514). Durham Research Online

Semi-automatic literature review for Biosciences

○ Hodgson, R., Wang, J., Cristea, A.I., Matsuzaki, F. and Kubota, H., 2022. A Topic-Centric Crowdsourced Assisted Biomedical Literature Review Framework for Academics. EDM’22 (educationaldatamining.org)

Personalised Journalism for the Media Industry

○ Hodgson, R., Wang, J. & Cristea, A., 2022. Hybrid Weighted Retrieval of Twitter Users for Temporally Relevant Full-Text Querying in the Media Industry. In 13th International Congress on Advanced Applied Informatics Winter (IIAI-AAI-Winter). Durham Research Online

Past related Events

• AIED 2022: The 23rd International Conference on Artificial Intelligence in Education, 27-31 July 2022, Durham University, UK, AIED2022 (durham.ac.uk); a Core A international conference that took place in Durham

• EDM 2022: the 15th iteration of the Educational Data Mining Conference Series, EDM2022 (durham.ac.uk); an international conference that took place in Durham

• Intelligent Tutoring Systems – 17th International Conference, ITS 2021, Virtual Event, June 7–11, 2021, Proceedings | Alexandra I. Cristea | Springer