Projects
-
Open proposal for applicant development
Thematic area: AI, Social Justice and Public Decision-making
Subject area: LINAS Themes
Project Overview
As AI becomes more sophisticated, we will witness algorithms and machines making use of huge data sets, including personal data, in decisions relating to medicine and healthcare, law and government, finance, city planning and even within military arenas. The ethical, legal, political and sociological aspects of living with machines whose AI algorithms are allowed to operate independently require careful investigation. In parallel, the question of how we maintain reproducibility and accountability within the scientific process requires novel approaches as we face AI algorithms driving scientific discovery in unexpected directions. We must ensure that these scientific results are both reproducible and repeatable when the data sets employed are so immense that they cannot be independently published.
Applicants are invited to consider very closely the themes of the LINAS Doctoral Training Programme and offer a proposal for research that will investigate both the social justice dimensions and the implications for scientific method of an algorithmic future in which artificial intelligence (AI) impacts on a key area of science and society.
Additional Information
Applicants are advised to make early contact with one of the members of the LINAS team listed below in order to discuss and develop a proposal that fits within both themes of LINAS and adopts its interdisciplinary approach.
LINAS Team:
- John Morison is Professor of Jurisprudence. He has research interests in algorithmic government, autonomous decision-making systems and Smart Cities.
- Mauro Paternostro is Professor of Quantum Information Science. He works in the formulation of a framework for machine learning-enhanced quantum information processing.
- Sandra Scott-Hayward is a Lecturer in Network Security with research interests in SDNFV security, predictive analytics for network security and performance optimized security implementation.
- Muiris MacCarthaigh is a Professor in Politics and Public Administration and co-director of the CITI-GENS DTP. His interests include the increasing role of technology in all aspects of public sector governance.
- Hans Vandierendonck is a Professor in HPC, Director of Postgraduate Research in EEECS, and co-director of the SCIDM DTP. His research interests include systems design for high-performance analytics and the precision and accuracy of algorithms.
- Roger Woods is a Professor in Digital Systems, Research Director of Data Science in ECIT and the Principal Investigator for Kelvin-2 HPC. His research interests are in high performance computing systems, data analytics and wireless systems.
- Mike Bourne is a Reader in International Security Studies and Director of Graduate Studies in HAPP. His research interests relate to the intersections of technology and politics in security issues such as border control, armed violence and arms control, and surveillance.
- Deepak Padmanabhan is a Lecturer in EEECS and an expert in machine learning, with interests in ethics and fairness for data-driven artificial intelligence.
- Teresa Degenhardt is a Lecturer in SSESW and Director of the ESRC-linked Masters in Social Research. She works on criminological theory in the international sphere, technology development and border security, surveillance, conflict, and migrant detention.
- Ciarán O’Kelly is a lecturer in the School of Law. His research is on accountability in public and private organisations. He is especially interested in how human rights instruments and frameworks are put to work in corporate social responsibility. He is co-chair on the permanent study group on Quality and Integrity of Governance in the European Group on Public Administration.
- Cathal McCall is a Professor in HAPP with research interests in Border Studies, cross-border cooperation, conflict and conflict transformation.
- Meg Schwamb is a Lecturer in Astrophysics. Her expertise is in big data for planetary astronomy, focusing in particular on applying citizen science/crowdsourcing to mine large astronomical and planetary science datasets.
- Stephen Smartt FRS is Professor of Astrophysics. He works on automated surveys of the sky, applying algorithms and machine learning to digital images.
Or a member of the Extended Supervisory Team:
Michelle Butler (SSESW), Ernst De Mooij (M&P), Hastings Donnan FBA (Mitchell Institute), Alessandro Ferraro (M&P), Alan Fitzsimmons (M&P), Giancarlo Frosio (Law), Chongyan Gu (EEECS), Jinguang Han (EEECS), David Jess (M&P), Anna Jurek-Loughrey (EEECS), Ayesha Khalid (EEECS), Debbie Lisle (HAPP), Mihalis Mathioudakis (M&P), Kieran McLaughlin (EEECS), Niall McLaughlin (EEECS), Paul Miller (EEECS), Ryan Milligan (M&P), Ciara Rafferty (EEECS), Karen Rafferty (EEECS), Austen Rainer (EEECS), Neil Robertson (EEECS), Sakir Sezer (EEECS), Stuart Sim (M&P), John Topping (SSESW), Gareth Tribello (M&P), Chris Watson (M&P), and David Wilkins (M&P).
-
Smart Home, Smart Decisions: Establishing accountability in algorithmic security systems
First Supervisor: Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: John Morison, School of Law
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Machine learning, Cyber Security, Networking, Law 
Project Overview
Today's home hosts a myriad of smart devices connecting to the Internet and to each other to provide us with a convenient, always-connected environment with access to entertainment, shopping, and social interaction. However, we now find ourselves home-schooling, receiving digital healthcare, and working from home, such that the home has become part of critical national infrastructure. It is therefore essential not only to secure this resource but also to consider the direct, real, and substantive consequences of security solutions that leverage emerging technologies.
Homeowners generally rely on their Internet Service Provider (ISP) to maintain the availability of the network and to offer a basic level of network security. Network intrusion detection and prevention systems (NIDS/NIPS) are a key tool in the network operator's defence strategy. Emerging technologies such as Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and Multi-Access Edge Computing (MEC), combined with advances in Machine Learning (ML) techniques, have enabled innovation in network security. For example, in the home environment, device fingerprinting can be used in combination with micro-segmentation to apply security policies dependent on the level of trust in the connecting device and/or a NIDS classification of the device as malicious, as sketched below. This can all be supported at the edge of the network in the home network router/hub.
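To make this concrete, the following is a minimal, purely illustrative sketch (the device classes, trust levels and segment names are assumptions, not any vendor's or the project's design) of how a home router might map a fingerprinting result and NIDS verdict onto a micro-segmentation policy.

```python
# Illustrative sketch only: mapping a device fingerprint and NIDS verdict onto
# a micro-segment at the home router. Classes, trust levels and segment names
# are hypothetical.

from dataclasses import dataclass

@dataclass
class DeviceVerdict:
    device_id: str
    fingerprint_class: str   # e.g. "smart_tv", "ip_camera", "unknown"
    trust_level: int         # 0 = untrusted .. 3 = fully trusted
    nids_flagged: bool       # True if the NIDS classified the device's traffic as malicious

def segment_for(verdict: DeviceVerdict) -> str:
    """Return the micro-segment a device should be placed in."""
    if verdict.nids_flagged:
        return "quarantine_segment"   # isolate pending further analysis or user input
    if verdict.fingerprint_class == "unknown" or verdict.trust_level == 0:
        return "guest_segment"        # internet access only, no access to other home devices
    if verdict.trust_level >= 2:
        return "trusted_segment"      # full LAN and internet access
    return "restricted_segment"       # limited, monitored access

if __name__ == "__main__":
    camera = DeviceVerdict("cam-01", "ip_camera", trust_level=1, nids_flagged=True)
    print(segment_for(camera))        # -> quarantine_segment
```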
The concept of algorithmic governance has been established, along with concerns about the use of facilitated decision-making systems that cluster around decision-making processes and outputs. The governance of ML-based home security systems raises similar concerns meriting analysis. What are the implications of an incorrect classification from an ML-based home security system? Who is accountable if you are locked out of your home, or prevented from working? Furthermore, a range of responses can be generated by the security system: the device may be blocked from the network temporarily or permanently, the device may be quarantined pending user input, or further automatic analysis could be applied. For each level of response, what evidence should the ML system supply (e.g., an explanation, or the presence of a malicious pattern extending for a specific duration)? How might this level of evidence be influenced by legal requirements, e.g., the need to respond to a challenge to the system's decision? If the threshold for evidence is not reached, what is the response? Human decision-making? How might we accommodate user feedback or institute a reasonable appeals process? Can we adapt principles underlying the accountability for reasonableness framework [1] to establish ground rules for fair process in decision-making involving ML-NIDS?
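In the simplest case, the graduated responses and evidence thresholds raised above could look something like the sketch below; the thresholds, durations and response names are invented for illustration, and the research question is precisely what evidence each tier should require and who is accountable for it.

```python
# Illustrative sketch of graduated ML-NIDS responses. All thresholds and
# response names are assumptions made for the sake of the example.

def choose_response(confidence: float, malicious_minutes: float) -> str:
    """Map classifier evidence onto a graduated response.

    confidence        -- classifier confidence that the device is malicious (0..1)
    malicious_minutes -- how long the suspicious traffic pattern has persisted
    """
    if confidence >= 0.95 and malicious_minutes >= 30:
        return "block_permanently"        # strong, sustained evidence
    if confidence >= 0.80:
        return "block_temporarily"        # strong but possibly short-lived evidence
    if confidence >= 0.60:
        return "quarantine_pending_user"  # ask the user before acting
    if confidence >= 0.40:
        return "run_further_analysis"     # gather more evidence automatically
    return "refer_to_human"               # evidence threshold not reached

# Each response could additionally be required to log an explanation (e.g. the
# matched traffic pattern and its duration) so that the decision can later be
# challenged, appealed and audited.
```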
The focus of this research project will be to explore these questions from the system perspective in order to establish accountability in algorithmic security solutions. For example, it will explore how the system can be designed to support the provision of evidence for classification decisions.
This project may be completed within three or four years, depending on the applicant’s experience.
[1] Daniels, Norman. "Accountability for reasonableness: Establishing a fair process for priority setting is easier than agreeing on principles." BMJ 321 (2000): 1300-1301.
Additional Information
This project aligns with the primary supervisor’s activities with the IEEE P2863 Working Group on Algorithmic Governance of AI and Polymath Fellowship in the Global Fellowship Initiative at the Geneva Centre for Security Policy (GCSP).
The student will have access to state-of-the-art network facilities and a cyber range in the Centre for Secure Information Technologies (CSIT), Belfast.
- Responsible Algorithms for Platform Cooperativism
First Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI, Politics
Project Overview
There has been an abundance of “platforms” in the modern era. These platforms, which include services like Uber, JustEat and Amazon, establish a computing platform (website, mobile apps, and protocols) to facilitate the sale of goods and services. Most platforms that we are familiar with operate on a profit-oriented model, where the commitment to shareholders to improve profits trumps other considerations. This plays a role in designing the algorithms that decide the price for sellers, the matching between buyers and sellers, and the remuneration for gig-workers who ‘partner’ with the platform. This design-for-profit often works to the significant detriment of gig-workers' remuneration, and the gig economy that these platforms power has been criticized for a significant drop in real wages and working conditions, and for implicitly reducing the workers' power to raise collective concerns.
There have been recent alternatives to platform capitalism, prominent among them being platform cooperatives. These platforms are cooperatively owned by the workers and democratically governed by the one-person-one-vote principle. Cooperatives have been around in traditional sectors for quite a long time and engage significant numbers of people, especially in countries in the global south (e.g., India). Participants in the cooperative model are motivated by giving back, altruism and a sense of duty (apart from financial benefit considerations), and these are natural to expect in geo-localized rural co-operatives such as dairy cooperatives. Platform cooperatives seek to reinvent this model to work effectively within the context of the web.
Our focus within this project is on identifying the overarching considerations that make platform cooperatives different from platform capitalism and on developing enabling algorithms that focus on the particular needs of platform cooperatives. First, we will investigate key algorithms (e.g., matching, remuneration) within platforms and scrutinize their design for suitability for use within platform cooperatives. For example, to what extent are profit motives being privileged over gig-workers' considerations, and how far do the size and shape of algorithm outcomes benefit the shareholder vis-à-vis other stakeholders? Second, we will re-design some of the algorithms by re-orienting them towards encouraging gig-worker cooperation (rather than competition being the sole focus). This would involve embedding pragmatic fairness considerations in each task handled by algorithms, such as matching, remuneration and reputation; a simple illustration is sketched below. Third, towards the end of the project, we will work closely with a platform cooperative (links to be built during the earlier part of the project) towards piloting these algorithms.
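As a purely illustrative sketch of what re-orienting a matching algorithm might mean (the weights, earnings target and field names are assumptions, not a proposed design), a dispatch rule could blend service quality with a fairness term that spreads work towards lower-earning workers:

```python
# Illustrative sketch: a dispatch score that blends service quality with a
# fairness term. Weights, the earnings target and field names are hypothetical.

def dispatch_priority(worker: dict, fairness_weight: float = 0.5) -> float:
    """Higher is better: trade off delivery quality against spreading work fairly."""
    quality = 1.0 / (1.0 + worker["distance_km"])                 # nearer worker = faster service
    target = 500.0                                                # assumed weekly earnings target
    need = max(0.0, 1.0 - worker["earnings_this_week"] / target)  # further below target = higher need
    return (1.0 - fairness_weight) * quality + fairness_weight * need

if __name__ == "__main__":
    workers = [
        {"id": "a", "earnings_this_week": 320.0, "distance_km": 1.0},
        {"id": "b", "earnings_this_week": 40.0,  "distance_km": 3.0},
    ]
    def pick(fw):
        return max(workers, key=lambda w: dispatch_priority(w, fw))["id"]
    print(pick(0.0))   # -> a : a pure quality/margin view favours the nearest worker
    print(pick(0.5))   # -> b : the fairness term steers work to the lower earner
```

Analogous terms could in principle be embedded in remuneration and reputation calculations; which considerations a cooperative's members would actually want encoded is itself a research question for the project.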
Besides being a technical project focused on algorithm development, this has a strong social, human and political dimension, and we expect to build key algorithms to enable cooperatives within the digital space that is currently dominated by high-tech for-profit companies.
References
Sophie Atkinson, “‘More than a job’: the food delivery co-ops putting fairness into the gig economy”, The Guardian, May 2021
Platform Cooperativism Consortium, https://platform.coop/
Wikipedia Article on Platform Cooperativism listing several platform cooperatives, https://en.wikipedia.org/wiki/Platform_cooperative
Rochdale Principles, a set of ideals for the operation of cooperatives. Ref: https://en.wikipedia.org/wiki/Rochdale_Principles
- The Psychological Consequences of Perceived Algorithmic Injustice
First Supervisor: Gary McKeown, School of Psychology
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: AI and Humans: Reckoning and Judgement
Project Overview
Increasingly, decisions about people's lives are being made by algorithms. Companies now exist that offer automated recruitment interviews that use emotion recognition software to judge people's emotional intelligence. Similar emotion recognition algorithms are also being used to monitor prison populations in the criminal justice systems of some countries. Online reviewers are assessed for the veracity, sentiment and value of their reviews, and there are attempts to automatically detect the personality characteristics of a reviewer. Reviews can be gathered from formal review websites and communities or scraped from social media, as is typical with sentiment analysis companies. This project seeks to assess the impact of algorithmic assessment on humans through a series of psychological experiments, in the laboratory and, where possible, in real-world settings.
There is considerable debate in the world of emotion psychology concerning the degree to which we understand the nature of people's emotions (Feldman Barrett et al., 2019). However, despite the lack of consensus within psychology about how we should interpret linguistic and social signals for their emotional meaning, the technological and computer science domains feel little compulsion to address the complexity or nuances of the arguments within psychology. A typical approach within the world of affective computing is to choose a psychological theory “off the shelf” and implement that theory uncritically. Theories that allow simple classification are preferred over more complex theoretical approaches, as classification algorithms are much easier to implement within typical supervised machine learning paradigms. In assessing the sentiment of reviews and attributing personality characteristics to reviewers there is more common use of regression-based techniques, but these often make assessments from a small amount of evidence and minimal context. In both contexts, decisions that influence people's lives can be based on weak theoretical foundations.
This project seeks to experimentally manipulate the style of algorithm, between classification and continuous dimensional assessment, and the degree to which the assessments match people's understanding of their own behaviour, using classic experimental psychology paradigms. In the first two experiments, the provision of feedback from mock automated job interviews in both services-sector employment and graduate-level employment will be manipulated. The feedback will either reflect or deviate from psychometric scores provided by the participants as part of the mock interview process. The degree of satisfaction or dissatisfaction with both the feedback and the providers of that feedback will be used as the experimental measures. Additionally, there will be evaluations of the automated assessment process in terms of perceived agency, level of influence, level of frustration and trust in the process on the part of the participants. Two further experiments assess the impact of automated reviews on both reviewers and participants engaged in services-style employment. The first experiment will provide feedback to reviewers of products through automated assessment of their reviews; the feedback will provide a quality score for the review, a score of the emotional tone and its distance from other reviewers, and a judgement of the reviewer's personality, ostensibly based on the reviews but drawing from psychometric measures to manipulate the degree to which the judgements concur with the psychometric personality scales. The second experiment will engage participants as workers who have been reviewed using an automated judgement system, manipulating the review to be congruent or incongruent with performance. Two final experiments will seek to move these findings from the laboratory into the field, addressing these issues with companies who engage, or are seeking to engage, in this kind of automated assessment of people.
- An analysis of citizen science in the detection, selection and labelling of human rights violations and criminal events
First Supervisor: Teresa Degenhardt, School of Social Sciences, Education and Social Work
Second Supervisor: Meg Schwamb, School of Mathematics and Physics
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Interdisciplinary
Project Overview
Citizen science and crowdsourcing encompass a wide range of practices related to very different tasks and actors (Estelles Arolas and Gonzalez Ladron de Guevara, 2012: 197). They enable the enrolment of volunteers to undertake specific tasks. Increasingly, prominent international NGOs such as Amnesty International use crowdsourcing to verify information from social media on abuses of human rights, extrajudicial killings and war crimes worldwide. Citizen science represents a new and interesting form of complex detection, selection and labelling process by which humans in different positions/locations and with different backgrounds enable and contribute to the verification of data. These assemblages of actors involve algorithms, private individuals, police forces and private actors enrolled to determine the suspicious nature of a person. This project aims to investigate how data is assembled by different individuals in collaboration with algorithms, using the case study of Zooniverse.
Technologies to assess the location, sounds, images and veracity of material are now really important tools in the fight against human rights violations, in criminal investigations, and in a number of other settings. Daily snapshots of information from satellites, YouTube, Twitter, Facebook and Instagram can be assessed by private actors and individuals with the aim of detecting human rights violations and criminal actions. These new and varied sources of information and data raise important questions around labelling by the individuals in the loop, who are called upon to evaluate events that may constitute serious crimes and form the basis for prosecution.
Main Research Questions
- How does a crowd source classification system operate?
- How do scientists decide on the production of the algorithm and its training for the task?
- How do people contribute to the process of determining the content of new images and data?
- How do different actors operate for the coding?
Methodology
The student will conduct a case study to understand how scientists operate during the development stage of these new mechanisms.
To provide context for the main themes of the PhD research programme, the student will also engage in a computational component providing hands-on experience in interpreting crowd-sourced classifications in order to efficiently and effectively sift through large datasets. Built upon the Zooniverse project builder platform (http://www.zooniverse.org), the Planet Four projects (Planet Four, Planet Four: Terrains, and Planet Four: Ridges; http://www.planetfour.space) are three online citizen science projects. The Planet Four projects engage over 150,000 volunteers worldwide in interpreting imagery from spacecraft in orbit around Mars. Planet Four is creating an unprecedented wind map of the southern pole of Mars, Planet Four: Terrains is studying the distribution of carbon dioxide jets at the Martian south pole, and Planet Four: Ridges is exploring the distribution of polygonal ridges across the Martian mid-latitudes. The student will have the opportunity to understand how the process works and to interview some of the people involved.
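For illustration only (the labels and the agreement threshold below are assumed, not the actual Planet Four pipeline), combining many volunteers' classifications of a single image typically reduces to a consensus calculation along these lines:

```python
# Minimal sketch of consensus aggregation for crowd-sourced classifications.
# Labels and the agreement threshold are illustrative, not the project's values.

from collections import Counter

def consensus(classifications: list[str], threshold: float = 0.8):
    """Return (label, agreement) if volunteers agree strongly enough, else (None, agreement)."""
    counts = Counter(classifications)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(classifications)
    return (label, agreement) if agreement >= threshold else (None, agreement)

if __name__ == "__main__":
    print(consensus(["fan", "fan", "blotch", "fan", "fan"]))   # -> ('fan', 0.8): subject can be retired
    print(consensus(["fan", "blotch", "blotch", "fan"]))       # -> (None, 0.5): needs more classifications
```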
Ethics
A process of ethical approval will be followed according to the requirements of the school in which the project is based.
This is available as a 4-year funded PhD or a 3-year PhD, depending on the level of interdisciplinarity covered by the project/student.
Additional Information
Applicants with previous contacts and collaborations with NGOs and other actors involved in crowdsourcing are preferred.
- Humans & Artificial Intelligence in Unmanned Aerial Systems: a dangerous relationship?
First Supervisor: Teresa Degenhardt, School of Social Sciences, Education and Social Work
Second Supervisor: Vishal Sharma, School of Electronics, Electrical Engineering and Computer Science
Thematic area: Science, Governability and Society
Subject area: Interdisciplinary
Project overview
As per reportocean.com [1], the global Unmanned Aerial Vehicle (UAV) market will reach USD 54.2 billion by 2027 and is expected to impact the GDP growth of the UK by 2%. UAVs have become an integral part of autonomous aerial cyber-physical systems with their wide range of operations and ease of deployment. UAVs, together with ground controllers and a command and control centre, form an Unmanned Aerial System (UAS). These systems rely heavily on artificial intelligence solutions for autonomous and non-autonomous applications in search, reconnaissance, and rescue operations. Due to the sensitive nature of the AI involved, these systems need to be carefully governed and checked for cyber threats.
UAS involve human intervention when feeding in the mission data, obtaining imagery, or taking manual control of the UAVs. Such activities rely on the oversight of humans, who can check on the system's functionality and the appropriateness of some actions given the level of destruction that UAVs can cause. UAVs are increasingly considered problematic in relation to their destructive ability, which may be automated, and issues have been raised about the quantity and quality of the information upon which they act, as well as about redress in cases of error. This project aims to understand how AI and humans operate within the UAV system. What position do humans have in the decision chain, and how do they appraise their work in conjunction with AI? What constitutes a threat? How do actors, both human and AI, determine when to intervene, and how do they do so?
This project considers AI solutions to securing these complex processes. In a mission-critical ecosystem, it is necessary to understand the framework that manages the determination of rogue behaviour and the interaction of humans and UAS. This project seeks to understand whether AI may intervene in these critical settings. As it is important to explore humans' understanding of this process before developing security functionalities, this project will first require a social science component. Connecting the social dimension with the interfaces and the operational AI makes this project innovative and interdisciplinary, as well as timely.
References
1. Global Unmanned Aerial Vehicle (UAV) Market Size study, https://reportocean.com/ last accessed November 4, 2021.
2. Feng, L., Wiltsche, C., Humphrey, L. and Topcu, U., 2016. Synthesis of human-in-the-loop control protocols for autonomous systems. IEEE Transactions on Automation Science and Engineering, 13(2), pp.450-462.
3. Marra, William C., and Sonia K. McNeil. "Understanding the loop: regulating the next generation of war machines." Harv. JL & Pub. Pol'y 36 (2013): 1139.
4. Wall, T. and Monahan, T. (2011) “Surveillance and violence from afar: the politics of drones and liminal security scapes”, Theoretical Criminology 15 (3): 239-254.
5. Degenhardt, T. (2015) “Crime, Justice and the Legitimacy of Military Power in the International Sphere,” Punishment and Society, Vol. 17, No. 2 (2015): 139-162.
-
Assessing and Developing Trustworthy AI
First Supervisor: Hans Vandierendonck, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: John Morison, School of Law
Thematic Area: Science, Governability and Society
Subject Area: Computer Science
Project Overview
Artificial intelligence (AI) is increasingly applied in technological products, science and government. Many forms of AI, in particular data-driven approaches such as deep learning, are subject to a number of conditions that raise questions about their robustness and trustworthiness. To counter this, the European Union has produced the Assessment List for Trustworthy AI (ALTAI), a list of considerations and requirements that, when applied during the deployment of AI, should lead to “cutting-edge AI worthy of our individual and collective trust”. The goal of this PhD project is, in the first instance, to test the practical application of ALTAI on a problem in machine learning (for instance, recommendation systems) and to qualitatively and quantitatively analyse the extent to which application of ALTAI will achieve its intended goal. Secondly, the project aims to design metrics, algorithms and processes that make it easier to conform to the ALTAI. Additionally, the project can explore the legal and regulatory framework, particularly around recommendation systems, and assess the sufficiency of the ALTAI approach and the possibilities of improving upon it, in light of a range of overlooked groups and factors whose perspectives are missing from the high-level European approach.
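As one hypothetical example of the kind of quantitative metric such a project could design (ALTAI itself is a qualitative assessment list; this sketch merely illustrates a possible proxy for its diversity and fairness considerations applied to a recommender), exposure across a catalogue could be measured with normalised entropy:

```python
# Sketch: a possible quantitative proxy for a diversity/fairness check on a
# recommender. Purely illustrative; not part of ALTAI, which is qualitative.

from collections import Counter
import math

def exposure_entropy(recommended_items: list[str]) -> float:
    """Normalised Shannon entropy of item exposure: 1.0 = evenly spread, 0.0 = one item dominates."""
    counts = Counter(recommended_items)
    if len(counts) < 2:
        return 0.0
    total = len(recommended_items)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))

if __name__ == "__main__":
    # Recommendations served to many users, flattened into a single list of item IDs.
    served = ["item1"] * 90 + ["item2"] * 5 + ["item3"] * 5
    print(round(exposure_entropy(served), 2))   # ~0.36: exposure is heavily concentrated
```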
There are options to connect this project with Trustpilot, who are heavily invested in ensuring trustworthiness of their AI.
- Security, Privacy and Governance in AI-assisted Unmanned Aerial Surveillance
First Supervisor: Vishal Sharma, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Mike Bourne, School of History, Anthropology, Philosophy and Politics
Third Supervisor: John McAllister, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: Science, Governability and Society
Subject Area: Governance, Cybersecurity and Surveillance
Project Overview
Unmanned Aerial Systems (UAS) have a huge role in military and civilian applications, where government agencies and private organizations can use them for surveillance of national and industrial infrastructure, peacekeeping, crowd management, and border security. PwC [1] predicted that by 2030 the drone industry would have an impact of £42bn on the UK economy, with £16bn in net cost savings. This would involve around 70,000+ UAVs operating at any one time, with one third in the public sector, and more than 600,000 jobs, as noted in the Government's response to Commercial and recreational drone use in the UK [2].
UAS incorporate several technologies, such as aerial imaging, communication, and data analytics, to perform surveillance and reconnaissance, enabled through a set of complex AI algorithms offering more than a ‘first point view’. Government agencies rely on expert advice to form guidelines on the operation of aerial vehicles, and those guidelines must be assured when UAS are in operation. It is worth noting that surveillance by UAS does not offer people a choice to understand or agree with the way their data will be used and stored [3][4].
AI-assisted UAS raise a range of issues of surveillance, privacy, freedom, and security. The emerging control ecosystem has evolved to incorporate legal principles, design principles such as privacy-by-design [5] and security-by-design, operational guidelines, and a wide range of stakeholders and agencies. This project seeks to understand how the challenges of AI-assisted UAS get parsed into technical and legal domains: which issues get resolved through design and which through regulation? How do these combine? And, most importantly, how do the tensions between them get settled or distributed? The project further investigates the engineering aspects of ethical AI, to understand how the design principles of security and privacy can be deployed for UAS surveillance, and algorithmic evaluations of the underlying AI, to resolve the data-islands dilemma on privacy and security. These research questions make the project interdisciplinary, spanning cybersecurity and the political problems surrounding surveillance with unmanned aerial systems.
References
1. PWC (2018). Skies without limits: Drones – taking the UK’s economy to new heights, pwc.co.uk/dronesreport, [last accessed on October 26, 2021].
2. Commercial and recreational drone use in the UK: Government Response the Committee’s Twenty-Second Report of 2017–19 [HC 2021], SO. No. 152, Science and Technology Committee, available via http://www.parliament.uk.
3. Bakir V. Freedom or Security? Mass Surveillance of Citizens. In Handbook of Global Media Ethics 2021 (pp. 939-959). Springer, Cham.
4. Shalhoub-Kevorkian N. Security theology, surveillance and the politics of fear. Cambridge University Press; 2015 May 28.
5. Bu F, Wang N, Jiang B, Liang H. “Privacy by Design” implementation: Information system engineers’ perspective. International Journal of Information Management. 2020 Aug 1;53:102124.
Additional Information
This project may be completed within three or four years, depending on the applicant’s experience.
-
Emerging algorithmic technologies in legal practice and the representation of legal knowledge
First Supervisor: Ciarán O’Kelly, School of Law
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: Science, Governability and Society
Project Overview
Legal practice is now going through a period of profound change in terms of the kinds of tools available, not only for business management but also for retrieving and indeed for engineering knowledge. This has developed far beyond simply searching databases for legislation and precedents. Algorithms and AI are the raw materials of a huge and developing international Legal Tech industry, which ranges across all aspects of legal practice, from case and practice management to machine learning applications and, indeed, algorithmic decision-making. This project explores this developing area and focuses on how exactly these technologies model the wider legal process and all its complex relationships, and how they might alter current thinking and practices.
The project aims to develop understandings of how legal tech attempts to interact with and capture the range of practices that are involved in developing legal knowledge within the practice of law. It will look at how legal practice interacts with a range of ways of representing knowledge. How such understandings are shaped and used emerges from legal ontologies, that is, how ideas, facts and things are represented within institutions and in action. Such ontologies interact with the recording and retrieval tools legal practitioners use while they do their jobs. Tools both answer to and speak to how knowledge is imagined, how methods are devised and put in place to manage what is known, and perhaps even how knowledge is engineered.
The role of analogy in legal reasoning, to take a longstanding example, has seen practitioners devise archival practices, methods for retrieval and information management tools so that precedents and analogies can be incorporated into institutional memories in general and brought to the attention of specific practitioners when required. The project will consider if the new legal tech is attempting to replicate this essentially social practice or whether, alternatively, it conceptualises legal reasoning in an entirely different way. In other words, legal technologies may not simply reproduce previous human ontologies, but might innovate in how they reveal patterns, articulate insights, and identify risks. They may not merely record and retrieve information but may change what is known and what it means to know. Automated contract review tools, to offer another example, might not simply deliver efficiency savings to clients but may revise the parameters of knowledge in legal practice.
Overall, the project will focus on legal technologies and will ask whether technological change demands attention to legal ontologies. Do we need to attend to what it means to ‘know’ in legal practice in order to understand technological innovation? Or does technological change simply entail more efficient approaches that sit comfortably within existing ontological frameworks? And how is the practice of law bound up in the objects and technologies through which lawyers work?
Other Relevant information
The project builds on funded research already on-going in the Law School, in conjunction with partners in the Republic of Ireland. It also draws upon a range of contacts within the legal professions in Northern Ireland and beyond. The addition of expertise from EECS will provide an important additional element.
- Harm in the use of Open Source Intelligence (OSINT) for policing purposes
First Supervisor: Teresa Degenhardt, School of Social Sciences, Education and Social Work
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: Science, Governability and Society
Project Overview
Surveillance technologies, and their use by the police, are increasingly built into our lives: cameras are spread across cities, in shops and corporate stores, and are personally available in mobile phones and on social media. These cameras are becoming increasingly smart and connect data to the internet in ways that we are unaware of, following people's movements and creating profiles of them. People are effectively tracked in their movements in ways that were not possible before. Through fusion systems, data is passed on to various institutions and used for policing purposes. News reports suggest data is used by predictive and profiling technologies in case investigations by police departments in the US and in the Netherlands. The data recorded is not always precise and correct; neither the pictures of people nor their connection to other data can be taken as reliable evidence, yet they are effectively utilised, with harmful effects. These mechanisms produce harm by connecting people to specific histories and creating risk profiles of individuals, sometimes of young people and sometimes of activists. How can we devise mechanisms that prevent the sharing of data from open sources beyond our control?
This study aims to collect stories of harm from people selected on the basis of shared video footage connected with other data, creating histories of behaviour that are contested by the people identified. These stories will then be used to define the issues that emerge from an appraisal of the harm to which people were subjected, and to think about policies of control.
- AI and Humans: Reckoning and Judgement
First Supervisor: Professor Cathal McCall, School of History, Anthropology, Philosophy and Politics
Second Supervisor: Dr. Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: Science, Governability and Society
Project Overview
In The Promise of Artificial Intelligence: Reckoning and Judgment (MIT Press, 2019), Brian Cantwell Smith questions the intelligence of Artificial Intelligence (AI). He contrasts the spectacular reckoning ability of AI with the expansive human capacity for judgement based on ethical, economic, political, and social concerns.
Political responses to crises provide a rich empirical landscape in which to observe the interplay between AI reckoning and human judgement. AI is now embedded in those responses. AI's reckoning abilities, through data processing and pattern recognition, are of undoubted value in times of crisis. However, the human judgement of political leaders is crucial for truly intelligent responses. Those responses require ethical, economic, social and, ultimately, political judgements to address the myriad of complex and difficult challenges involved.
The project seeks to interrogate the nexus between AI reckoning and human judgement. Whenever a political leader proclaims “trust the data”, the door opens for the researcher to examine trust in AI's reckoning ability and in the judgement of political leaders.
The project can examine this interplay in state specific and/or crisis contexts.
- Algorithmic Decision Making and Online Content Moderation: Devising Technological and Regulatory Solutions to Preserve Fundamental Rights Online
First Supervisor: Giancarlo Frosio, School of Law
Second Supervisor: Oluwafemi Olukoya, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: Science, Governability and Society
Project Overview
Content moderation online, including the moderation of intellectual property infringing content, defamatory content, dangerous and hate speech, child pornography and abuse, or online disinformation, has increasingly been dealt with through Automatic Content Recognition (ACR) technology, automated filtering and other algorithmic means, such as in the case of (1) Google Content ID or Vimeo Copyright Match, (2) complex AI-powered technology deployed by online e-commerce platforms, such as Alibaba or eBay, to tackle online counterfeiting and piracy, or (3) other technologies deployed by online platforms to filter hate speech, dangerous speech, extremist content, and child pornography online, such as PhotoDNA. As Calabresi-Coase ‘least-cost avoiders’, Online Service Providers (OSPs) will inherently try to lower the transaction costs of adjudication and liability and, to do so, might functionally err on the side of overblocking. In particular, content recognition technologies might not properly distinguish between infringing content and legitimate content/uses, producing considerable false positives and causing chilling effects on online speech. OSPs' regulatory choices, which occur through algorithmic tools, can profoundly affect the enjoyment of users' fundamental rights, such as freedom of expression, freedom of information, the right to privacy and data protection, and due process.
The goal of this PhD project is, in the first instance, to test to what extent false positives occur in online content sanitisation, also depending on the subject matter to be moderated. For this task, measurement papers, industry telemetry and publicly available reports may be leveraged to obtain a realistic estimate of the false positive rate (FPR). However, further representative measurement studies will be required to correctly estimate the domain-specific prevalence of FPRs in online content sanitisation for the subject matter under consideration.
Second, the project aims to understand whether algorithmic decision-making applied to content moderation online can adequately distinguish between legal and illegal content, or instead can only distinguish between ‘manifestly illegal’ and ‘manifestly legal’ content. For this task, the algorithms, metrics, tools, and processes will be evaluated for completeness, consistency, and soundness in distinguishing between different types of content and the surrounding context. In this context, the project will investigate the availability of metrics, algorithms and processes that can help moderate illegal (or harmful) content, rather than merely ‘manifestly illegal’ content, and fulfil obligations such as those imposed, for example, by art 17 of Directive 2019/790/EU of 17 April 2019 (imposing an obligation on certain user-generated content platforms to deploy proactive monitoring and filtering, or so-called ‘upload filters’, to make unavailable any copyright-infringing content on their networks) or by both the UK Online Safety Bill and the recently approved EU Digital Services Act (providing obligations for platforms to set up systemic risk assessment and mitigating measures in relation to the design of algorithmic systems, in connection with the impact of their services on the exercise of fundamental rights and against the risks of certain kinds of online harms). For this task, AI-enabled automation will be leveraged for completeness checking of rules, terms of service, community guidelines and other policy documents of online platforms against the relevant regulations.
Finally, the project might explore the legal and regulatory framework regarding obligations and safeguards against the abuse of automated online content moderation that might impinge on fundamental rights, such as (1) devising transparency and non-discrimination requirements for algorithms, the data sets used to train them, and the decision-making process; (2) defining requirements for the application of the ‘human-in-command’ principle in the decision-making process; (3) articulating standards for subjecting automated removal and blocking mechanisms to external audits; and (4) establishing clear accountability, liability and redress mechanisms to deal with potential harm resulting from the use of automated decision-making.
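As a minimal illustration of the first task (the numbers below are invented placeholders; real estimates would come from measurement studies, telemetry and transparency reports), the false positive rate reduces to a simple confusion-matrix calculation:

```python
# Minimal sketch: estimating a moderation filter's false positive rate (FPR)
# from an audited sample. All numbers are invented placeholders.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of legitimate content wrongly removed."""
    return false_positives / (false_positives + true_negatives)

if __name__ == "__main__":
    # Hypothetical audit of 10,000 items of legitimate (non-infringing) content:
    fp = 150        # legitimate items the filter blocked
    tn = 9_850      # legitimate items the filter correctly left up
    print(f"FPR = {false_positive_rate(fp, tn):.1%}")   # -> FPR = 1.5%
```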
For more information, please view the LINAS Virtual Information Session video, recorded on 9 November 2022.