Projects
We welcome all proposals that address the general themes of "AI, Social Justice and Public Decision-making" or "Science, Governability and Society."
We ask that you read this page carefully and consider a proposal that either addresses or draws inspiration from the projects below. Each of these projects is an attempt to address our general themes. You are not tied to these projects but we do ask that you give serious consideration to the opportunities that emerge from the interdisciplinary Programme's focus and character.
Applicants are advised to make early contact with one of the members of the LINAS team in order to discuss and develop a proposal that fits within both themes of LINAS and adopts its interdisciplinary approach.
-
Responsible Algorithms for Platform Cooperativism
First Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI, Politics
Project Overview
Platforms have proliferated in the modern era. These platforms, which include services such as Uber, JustEat and Amazon, establish a computing platform (website, mobile apps, and protocols) to facilitate the sale of goods and services. Most platforms that we are familiar with operate on a profit-oriented model, in which the commitment to shareholders to improve profits trumps other considerations. This shapes the design of the algorithms that decide prices for sellers, the matching between buyers and sellers, and remuneration for the gig-workers who ‘partner’ with the platform. This design-for-profit often works to the significant detriment of gig-workers, and the gig economy that these platforms power has been criticized for driving down real wages and working conditions, and for implicitly reducing workers’ power to raise collective concerns.
Recent alternatives to platform capitalism have emerged, prominent among them platform cooperatives. These platforms are cooperatively owned by their workers and democratically governed on the one-person-one-vote principle. Cooperatives have long operated in traditional sectors and engage significant numbers of people, especially in countries in the global south (e.g., India). Participants in the cooperative model are motivated by giving back, altruism and a sense of duty (alongside financial considerations), motivations that come naturally in geo-localized rural cooperatives such as dairy cooperatives. Platform cooperatives seek to reinvent this model to work effectively within the context of the web.
Our focus within this project is on identifying the overarching considerations that distinguish platform cooperatives from platform capitalism, and on developing enabling algorithms that address the particular needs of platform cooperatives. First, we will investigate key algorithms (e.g., matching, remuneration) within platforms and scrutinize their design for suitability for use within platform cooperatives: to what extent are profit motives privileged over gig-workers’ interests, and to what extent do algorithmic outcomes benefit shareholders rather than other stakeholders? Second, we will re-design some of these algorithms by re-orienting them towards encouraging gig-worker cooperation (rather than making competition the sole focus). This will involve embedding pragmatic fairness considerations in each task handled by algorithms, such as matching, remuneration and reputation. Third, towards the end of the project, we will work closely with a platform cooperative (links to be built during the earlier part of the project) to pilot these algorithms.
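To make the contrast concrete, here is a toy sketch (not the project's actual algorithms; all names, fees and the assignment rule are invented for illustration) of how a matching algorithm's objective changes when a fairness consideration is embedded: a profit-first matcher always picks the worker who costs the platform least, while a cooperative-oriented matcher assigns each job to the suitable worker with the lowest accumulated earnings, spreading income across the pool.

```python
def profit_first_match(jobs, workers, cost):
    """Assign each job to the worker who costs the platform least
    (a caricature of profit-oriented matching)."""
    assignment = {}
    for job in jobs:
        assignment[job] = min(workers, key=lambda w: cost[w])
    return assignment


def cooperative_match(jobs, workers, cost):
    """Assign each job to the worker with the lowest earnings so far,
    embedding a pragmatic fairness consideration in the matching step."""
    earnings = {w: 0.0 for w in workers}
    assignment = {}
    for job in jobs:
        chosen = min(workers, key=lambda w: earnings[w])
        assignment[job] = chosen
        earnings[chosen] += cost[chosen]  # the fee paid becomes the worker's income
    return assignment
```

With two workers whose fees are 8 and 10, the profit-first matcher sends every job to the cheaper worker, whereas the cooperative matcher alternates between them; a real design would of course also weigh suitability, availability and worker preferences.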
Besides being a technical project focused on algorithm development, this project has strong social, human and political dimensions, and we expect to build key algorithms that enable cooperatives within a digital space currently dominated by high-tech for-profit companies.
References
Sophie Atkinson, “‘More than a job’: the food delivery co-ops putting fairness into the gig economy”, The Guardian, May 2021
Platform Cooperativism Consortium, https://platform.coop/
Wikipedia Article on Platform Cooperativism listing several platform cooperatives, https://en.wikipedia.org/wiki/Platform_cooperative
Rochdale Principles, a set of ideals for the operation of cooperatives. Ref: https://en.wikipedia.org/wiki/Rochdale_Principles
-
The Psychological Consequences of Perceived Algorithmic Injustice
First Supervisor: Gary McKeown, School of Psychology
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: AI and Humans: Reckoning and Judgement
Project Overview
Increasingly, decisions about people’s lives are being made by algorithms. Companies now exist that offer automated recruitment interviews using emotion recognition software to judge people’s emotional intelligence. Similar emotion recognition algorithms are being used to monitor prison populations in the criminal justice systems of some countries. Online reviewers are assessed for the veracity, sentiment and value of their reviews, and there are attempts to automatically detect the personality characteristics of a reviewer. Reviews can be gathered from formal review websites and communities, or scraped from social media as is typical among sentiment analysis companies. This project seeks to assess the impact of algorithmic assessment on humans through a series of psychological experiments, in the laboratory and, where possible, in real-world settings.
There is considerable debate in emotion psychology concerning the degree to which we understand the nature of people’s emotions (Feldman Barrett et al., 2019). Yet despite this lack of consensus within psychology about how linguistic and social signals should be interpreted for their emotional meaning, the technological and computer science domains feel little compulsion to address the complexity or nuances of the arguments within psychology. A typical approach within affective computing is to choose a psychological theory “off the shelf” and implement it uncritically. Theories that allow simple classification are preferred over more complex theoretical approaches because classification algorithms are much easier to implement within typical supervised machine learning paradigms. In assessing the sentiment of reviews and attributing personality characteristics to reviewers, regression-based techniques are more common, but these often make assessments from a small amount of evidence and minimal context. In both contexts, decisions that influence people’s lives can be based on weak theoretical foundations.
This project will experimentally manipulate the style of algorithm – classification versus continuous dimensional assessment – and the degree to which the assessments match people’s understanding of their own behaviour, using classic experimental psychology paradigms. In the first two experiments, the provision of feedback from mock automated job interviews will be manipulated in both services-sector and graduate-level employment. The feedback will either reflect or deviate from psychometric scores provided by the participants as part of the mock interview process. The degree of satisfaction or dissatisfaction with both the feedback and the providers of that feedback will serve as the experimental measures. Additionally, participants will evaluate the automated assessment process in terms of perceived agency, level of influence, level of frustration, and trust in the process. Two further experiments assess the effects of automated reviews on both reviewers and participants engaged in services-style employment. The first will provide feedback to reviewers of products through automated assessment of their reviews; the feedback will comprise a quality score for the review, a score for its emotional tone and its distance from other reviewers, and a judgement of the reviewer’s personality ostensibly based on the reviews but drawing on psychometric measures to manipulate the degree to which the judgements concur with the psychometric personality scales. The second will engage participants as workers who have been reviewed using an automated judgement system, manipulating the review to be congruent or incongruent with performance. Two final experiments will seek to move these findings from the laboratory into the field, addressing these issues with companies who engage, or are seeking to engage, in this kind of automated assessment of people.
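The two algorithm styles being contrasted can be sketched as follows (a purely illustrative toy, not any deployed system; the labels, thresholds and the valence/arousal framing are invented for this example): a categorical model forces a continuous emotional state into one discrete label, while a dimensional model preserves the continuous assessment.

```python
def categorical_style(valence, arousal):
    """Collapse a continuous emotional state into one discrete label,
    as typical 'off the shelf' classification approaches do.
    Thresholds and label names are arbitrary illustrations."""
    if valence >= 0.0:
        return "happy" if arousal >= 0.0 else "content"
    return "angry" if arousal >= 0.0 else "sad"


def dimensional_style(valence, arousal):
    """Return the continuous assessment itself rather than a class."""
    return {"valence": round(valence, 2), "arousal": round(arousal, 2)}
```

A near-neutral state (e.g., valence 0.05) lands in a confident-looking category under the first style, while the second style makes the borderline nature of the judgement visible; it is exactly this difference in feedback that the experiments would manipulate.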
-
Designing Novel Facial Recognition to Empower and Support People Living With Dementia
First Supervisor: Dr Richard Gault, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Dr Kevin Brown, School of Law
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI, Computer Vision, Deep Learning, Privacy
Project Overview
Facial recognition has been used in recent years as an authentication method for logging in to electronic systems. Such systems typically hold an increasing amount of personal data, and mistaken access granted through this authentication method can have significant real-world implications. From one perspective, this form of authentication provides a very accessible way for certain demographics to log in to digital services where other authentication methods may be challenging (e.g., difficulties with typing, remembering passwords, or verbal authentication). However, facial recognition models must be robust to changes in personal appearance [1], adversarial attacks [2], a wide range of facial perspectives, varying image resolution, and lighting conditions. This project will focus on the use of facial recognition as an authentication method for people living with dementia.
Dementia is an umbrella term for a range of diseases, and the symptoms of dementia vary across individuals. While trying to promote independent lifestyles for people living with these diseases, challenging conversations often arise amongst families and care providers on the issues of consent, privacy, and autonomy. This complex, sensitive, and multifaceted subject must be captured and reflected in any assistive AI technology. To date, facial authentication has not accounted for these socio-legal factors. Mechanisms such as lasting power of attorney were established in the UK in 2007 in relation to decision-making in the areas of health and welfare, and property and finance. With the increasing use of AI in daily life and machine-based decision-making, the development of novel AI systems and corresponding socio-legal reflection are needed so that facial recognition models are endowed with principles that empower, support, and protect someone living with dementia. This project will see the successful candidate work in an interdisciplinary team with expertise from Computer Science, Law, and Nursing, who will support the exploration of the technical, legal, and social considerations in this area.
This project aims to develop novel AI methods for facial recognition that are robust to complex changes in appearance, and to develop a suite of statistical metrics to bridge the gap between quantitative evaluation of computational models and key qualitative socio-legal factors. To achieve this aim, the project will pursue the following objectives:
- Develop novel facial authentication methods that are robust to a range of real-world challenges,
- Evaluate the socio-legal factors that surround facial authentication for people living with dementia,
- Develop novel metrics for model evaluation and training that preserve socio-legal factors in an accessible and interpretable manner.
References
[1] Taigman, Yaniv, et al. "DeepFace: Closing the gap to human-level performance in face verification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
[2] Dong, Yinpeng, et al. "Efficient decision-based black-box adversarial attacks on face recognition." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
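One place where quantitative evaluation meets the socio-legal factors above is the choice of verification threshold. A minimal sketch (not the project's method; the scores below are invented similarity values in [0, 1]) shows the two error types that matter in this setting: a false accept grants a stranger access to personal data, while a false reject locks out the person living with dementia.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute the false reject rate (genuine comparisons scoring below
    the threshold) and false accept rate (impostor comparisons scoring
    at or above it) for a face-verification score threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far
```

Sweeping the threshold over such scores traces the usual trade-off curve; the project's proposed socio-legal metrics would sit alongside, rather than replace, these standard quantities.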
Other relevant information
This project is part of a suite of projects promoting dementia-friendly communities. You will have an opportunity to work in partnership with people living with dementia through the empowerment organisation Dementia NI.
-
The Responsible Use of Generative AI When Working With Creative Stories of Lived Experience
First Supervisor: Austen Rainer, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Paul Murphy, School of Arts, English and Languages
Note: we will also engage Dr Anthony Quinn, potentially as a third supervisor. Dr Quinn is currently at QUB on a Fellowship from the Royal Literary Fund.
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI and Humans: Reckoning and Judgement
Project Overview
Aim: The aim of the proposed project is to evaluate the degree to which already-deployed generative AIs, such as ChatGPT, (dis)empower the writer, reader, and translator in the creative expression and interpretation of lived experience.
Interdisciplinary dimension: Generative AIs have been described as “stochastic parrots” that formulate responses to prompts using statistical processes in which there is no (or very limited) traceability to the source corpus. When generative AI is used to support self-expression through creative writing, there is a risk that this “statistical parroting” encourages the writer to conform to statistically typical expressions, e.g., formulaic literary expression based on the nature of the source corpus, with all its subtle cultural and literary biases. One potential consequence is that the writer is disempowered: their voice – the ways in which they uniquely and creatively communicate in writing – is lost in a kind of statistically-aggregated ‘chorus’; and their thinking too – the way they use language to make sense of experience – is ‘corrupted’ by a kind of statistically-normative ‘sense-making’. These consequences are particularly significant when expressing lived experience, e.g., in memoir – where the very act of expression can be an act of empowerment, even emancipation – and where the writer has limited writing experience, lacks confidence, or is constrained in other ways (e.g., by age or disability) and thus turns to generative AIs, like ChatGPT, for assistance.
Similar risks and consequences arise for readers and for translators. For example, with Mahatma Gandhi’s autobiography, Gandhi’s native culture does not prioritise the individual’s lived experience in the way implied by the word autobiography (hence the book’s qualified title, The Story of My Experiments with Truth) yet an Anglo-Saxon cultural interpretation, and therefore generative AI ‘built’ on a corpus of Anglo-Saxon literature, would subtly prioritise interpretations centred on the individual. (Indigenous cultures, such as the Māori, also prioritise the community over the individual.)
Programme of Work: The project will bring together academics from the School of Arts, English and Languages, who have expertise in creative writing and memoir-writing (Dr Anthony Quinn), and the impact of arts-based interventions on public health (Dr Paul Murphy), with academics from EEECS (Prof. Austen Rainer) who have expertise in the empirical evaluation of algorithmic solutions.
The primary focus of the project is the empirical investigation of the experiences of writers (stratified, e.g., into novice writers and aged writers) who use existing generative AI to help them write creatively about their lived experience, and of the impact of using generative AI on the writer’s sense of empowerment and agency. A secondary focus – time and resources permitting – would be the experiences of readers and translators.
Envisaged impact: The project will raise the public’s understanding of the benefits and risks to (dis)empowerment through generative AI, help writers more accurately appreciate the strengths and limitations of generative AI in their (professional) work, and encourage greater awareness and appreciation amongst the AI community (including software engineers) of the limits of algorithmic solutions and, therefore, of the responsible use of generative AI in multicultural, global society.
Other relevant information
Briefly:
- There are emerging partnerships with Crescent Arts Centre in Belfast, the UK’s Royal Literary Fund (RLF) and professional writers in Northern Ireland and the wider UK.
- The project will benefit from prior work in the following areas:
- Two workshops that Dr Catherine Menon (University of Hertfordshire) and Austen Rainer have undertaken with emerging and professional writers, hosted by Crescent Arts Centre.
- Exploratory work that Dr Anthony Quinn (professional writer, RLF Fellow and former lecturer at QUB) has already undertaken with novice writers, with funding from the Irish Arts Council.
- Software engineering students using a memoir (My Name is Why, by Lemn Sissay) in their software projects, e.g., 1 x BSc student’s final-year project; 1 x MSc taught project; 1 x MEng research and development project; 155 x students working in teams.
- Cultural projects undertaken by Dr Paul Murphy and colleagues, e.g., Friel Reimagined.
- Also, Austen Rainer has recently completed an 80,000-word memoir, so is developing direct experience from both the software engineering perspective and the memoir-writing perspective.
-
Law, Technology and Legal Practice: A Regional Case Study of How Technology Provides a Multi-layered and Differentiated Transformational Experience for Law Firms and Practitioners
First Supervisor: John Morison, School of Law
Second Supervisor: Thomas Schultze-Gerlach, School of Psychology
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI and Humans: Reckoning and Judgement
Project Overview
Algorithmically driven technologies are fundamentally changing the way that law is practised. But this transformation is not experienced evenly. There is a world of difference between how ‘Big Law’ – those large, successful international legal firms – reacts to and develops technology and how medium-sized law firms based in smaller cities and towns respond. Very small legal practices and solo practitioners are particularly challenged by the economies of scale required to compete even in traditional markets.
Drawing upon understandings of both law and technology that decline to see developments as occurring without reference to their immediate context, this essentially socio-legal project investigates the differential impact of technology on legal practice and the ways in which the legal market responds. While a comparative dimension would be welcome, Northern Ireland provides an excellent initial context for this investigation: within a separate jurisdiction, the whole range of legal practice can be found inside a professional ecosphere where multi-national firms have a significant presence alongside more traditional medium and small-sized general legal practices and a local bar.
Applicants will work in an inter-disciplinary context to acquire understandings of how legal technology is evolving before developing novel insights into how legal practice both sets the agenda for and responds to such developments.
Other relevant information
This project is closely related to an on-going research project being carried out by John Morison and Ciaran O’Kelly, with connections to a wider all-Ireland research team in the Republic of Ireland.
-
Can We Have It All? Exploring the Trade-offs to Achieve Trustworthy AI in Future Networks
First Supervisor: Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: Science, Governability and Society
Subject Area: Cyber Security, 6G Networks, Machine Learning
Project Overview
Communication networks are expanding at a rapid rate to support an increasing number of users and services. Machine-learning-based solutions are fundamental to the design of future communication networks to meet the scale of connectivity. Where maximizing performance might once have been the singular goal of an ML-based system designer (e.g., achieving the highest accuracy or the quickest decision), issues of security and challenges of explainability have expanded the design requirements. However, addressing security (e.g., through adversarial training) can reduce the performance of an ML-based system. Similarly, the most explainable ML model might not be the most accurate.
Performance, security, and explainability are characteristics of Trustworthy Artificial Intelligence (TAI). TAI has many definitions, ranging from the EU High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI, which hold that AI should be lawful, ethical, and robust, to the International Telecommunication Union (ITU) programme of work to standardize privacy-enhancing technologies such as federated learning.
Taking account of these broad characteristics of trustworthy AI, how do we trade off performance, security, and explainability to achieve trustworthy AI in our future communication networks? Considering the human as both end-user of the network and designer of the system, and considering the expansion towards non-terrestrial networks and its associated political impacts, how do we approach this trade-off?
In this project, we will explore these questions in the application of ML-based network security solutions in 6G networks.
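The trade-off question can be made concrete with a toy sketch (the model names, score profiles and weighting scheme below are all invented for illustration, not results from any real system): score candidate ML models for a network-security task by a weighted combination of accuracy, robustness to adversarial inputs, and explainability, and observe how the "best" model changes as the designer's priorities change.

```python
# Invented profiles for three hypothetical intrusion-detection models.
MODELS = {
    "deep_net":      {"accuracy": 0.97, "robustness": 0.60, "explainability": 0.30},
    "adv_trained":   {"accuracy": 0.92, "robustness": 0.85, "explainability": 0.30},
    "decision_tree": {"accuracy": 0.88, "robustness": 0.70, "explainability": 0.95},
}


def best_model(weights):
    """Return the model name maximizing the weighted sum of criteria.
    `weights` maps each criterion to its importance for the designer."""
    def score(profile):
        return sum(weights[k] * profile[k] for k in weights)
    return max(MODELS, key=lambda name: score(MODELS[name]))
```

An accuracy-only designer picks the deep network; weighting all three criteria equally favours the more explainable model. Real TAI assessment is, of course, far richer than a scalar score, which is precisely why the trade-off needs principled study.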
Other relevant information
This project aligns with the primary supervisor’s research with the CyberAI hub at the Centre for Secure Information Technologies (CSIT), the NICYBER DTP at CSIT, and activities with the IEEE P2863 Working Group on Algorithmic Governance of AI.
The student will have access to state-of-the-art network facilities and cyber range in CSIT.
-
Mitigating the Impact of Stellar Activity on Planetary Discovery
First Supervisor: Dr Ernst de Mooij, School of Mathematics and Physics
Second Supervisor: Professor Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: Science, Governability and Society
Subject Area: Astrophysics / Ethics
Project Overview
The first exoplanet was discovered approximately 30 years ago. Since then, the field has made tremendous progress, with thousands of new discoveries enabled by advances in facilities. Dedicated surveys on novel instruments like HARPS3 aim to push the discovery space to planets with roughly the same mass and surface temperature as the Earth. However, stellar activity is currently the main limitation to reaching these goals.
The aim of this project is to investigate novel methods to remove the impact of stellar activity from radial velocity (RV) observations. By expanding the ACID code developed within ARC to allow multi-line Least Squares Deconvolution, the project will investigate the impact of separating lines that are strongly affected by stellar activity from lines that are not when measuring and correcting RVs.
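The underlying idea of activity correction can be illustrated with a deliberately simple toy (this is not the ACID/LSD method, and the numbers below are invented): if part of the measured RV signal correlates linearly with a stellar-activity indicator, fit and subtract that component so any planetary signal stands out more clearly.

```python
def activity_corrected_rv(rv, activity):
    """Remove the least-squares linear dependence of RV measurements on
    a stellar-activity indicator, returning activity-corrected RVs.
    A toy decorrelation, not a substitute for line-by-line methods."""
    n = len(rv)
    mean_a = sum(activity) / n
    mean_v = sum(rv) / n
    # Ordinary least-squares slope of RV against the activity indicator.
    cov = sum((a - mean_a) * (v - mean_v) for a, v in zip(activity, rv))
    var = sum((a - mean_a) ** 2 for a in activity)
    slope = cov / var
    # Subtract the activity-correlated component about its mean.
    return [v - slope * (a - mean_a) for v, a in zip(rv, activity)]
```

If the RVs were driven purely by activity, the corrected series collapses to a constant; in practice the project's approach is more powerful precisely because it separates activity-sensitive and insensitive spectral lines rather than relying on a single global indicator.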
The discovery of potentially habitable planets also raises ethical issues concerning how this knowledge is communicated to the public, as a planet being in the habitable zone does not guarantee it is habitable, something that can easily get lost in media interpretation. In addition, the pressure of showing exciting results could result in over- or mis-interpretation, especially as the competition to lead on new information can be fierce. To address this, there is a need to build a framework to allow for the responsible and transparent disclosure of research results while also protecting the integrity of researchers and their work.
-
Tidal Disruptions of Stars: A Critical Look at Machine Learning Methods for Finding Hungry Black Holes
First Supervisor: Dr Matt Nicholl, School of Mathematics and Physics
Second Supervisor: Dr Thai Son Mai, School of Electronics, Electrical Engineering and Computer Science
Third Supervisor: Dr Marisa McVey, School of Law
Thematic Area: Science, Governability and Society
Subject Area: Astrophysics
Project Overview
Exploration of time-variable astrophysical phenomena has been revolutionised by the advent of Big Data. In particular, the Vera Rubin Observatory will discover millions of candidate cosmic transients per year. Beginning next year, the Time-Domain Extragalactic Survey on the 4MOST facility will obtain spectroscopic follow-up observations of thousands of cosmic explosions discovered by Rubin, to identify the type of explosion and probe the underlying physics. In particular, it could classify hundreds of tidal disruption events (TDEs) of stars by supermassive black holes, leading to an order-of-magnitude increase in the known sample of these rare events.
However, selecting the right candidates from Rubin to observe with 4MOST is complicated, as other kinds of variable phenomena can occur in the centres of galaxies. Identifying the best targets, and quantifying the impact of our selection effects, will be critical to interpreting this legacy spectroscopic data set. In this PhD, the candidate would work on implementing algorithms to select potential TDEs for follow-up observations, using new and existing machine learning frameworks, to discover a large sample of TDEs in order to understand the statistics of the population and underlying physics.
Working with experts in computer science, the project will also include a novel approach to machine learning in astrophysics, in that the 4MOST spectroscopic data set will be large enough to cross-validate different selection algorithms to evaluate their biases – where do they agree and disagree, and what features are important for achieving a highly pure and/or complete TDE sample?
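The two sample statistics mentioned above are simple to state precisely (this sketch uses invented candidate identifiers purely for illustration): purity is the fraction of selected candidates that are real TDEs, and completeness is the fraction of real TDEs that the algorithm selected.

```python
def purity_completeness(selected, true_tdes):
    """Given sets of candidate identifiers, return (purity, completeness)
    for a selection algorithm: purity = true positives / selected,
    completeness = true positives / all real TDEs."""
    true_positives = len(selected & true_tdes)
    purity = true_positives / len(selected) if selected else 0.0
    completeness = true_positives / len(true_tdes) if true_tdes else 0.0
    return purity, completeness
```

Cross-validating different selection algorithms amounts to comparing these quantities (and where the algorithms disagree) on the same spectroscopically confirmed sample; a highly pure sample may sacrifice completeness, and vice versa.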
This project will also take a critical look at the ethical implications of the Big Data revolution in astronomy, by considering the environmental impact of processing data sets at the scale of Rubin and beyond. Using this data relies on supercomputing facilities for data processing and storage, both in the US and the UK, but the potential contribution to climate change has not been explored in detail.
Other relevant information
This project links with the ERC-funded KilonovaRank project in the School of Mathematics and Physics (PI Nicholl), aiming to discover rare events in Rubin data.
The Leverhulme Interdisciplinary Network on Algorithmic Solutions (LINAS) Doctoral Training Programme hosted an information session on 8 November 2023 at 1:00pm ahead of its deadline for applications.