LINAS Postgraduate programme
Leverhulme Interdisciplinary Network on Algorithmic Solutions (LINAS)
The LINAS Doctoral Training Programme (DTP) seeks to develop a cohort of Doctoral Scholars who can address the implications of massive-scale data processing, artificial intelligence (AI) and machine learning (ML) for the operation of algorithmically driven public decision-making, both in wider society and within science and engineering.
Together, these rapidly evolving approaches to data make up what have been termed ‘enigmatic technologies’, where authority is concealed behind algorithms. Within the social science domain, algorithmically driven public decision-making challenges the role of human agency and politics, human rights law, and principles of transparency and accountability. For science and engineering, there is a challenge to the traditional scientific governing principles of transparent working and reproducibility.
LINAS brings together legal scholars, social scientists, physical scientists, mathematicians, computer scientists and engineers to develop a distinctive cohort of doctoral students working across the boundaries of their own disciplines. Our ambition is to support the development of integrated, effective, scientifically rigorous and socially responsible algorithmic solutions.
It is hosted by The Senator George J. Mitchell Institute for Global Peace, Security and Justice in collaboration with the Schools of Law, Electronics, Electrical Engineering and Computer Science, Mathematics and Physics, History, Anthropology, Philosophy and Politics and Social Sciences, Education and Social Work.
The Scholarship includes:
- Full-time postgraduate research tuition fees at Standard UK Rates (not yet set for 2021/22). The standard tuition fee for 2020/21 was £4,407 for one year of study
- A maintenance award at the Research Councils UK national rate
- Research Training and Expenses
- Three or four year studentships are available
Eligibility
Scholarships covering full tuition fees and stipend are available to UK and Republic of Ireland students.
EU and other international students (excluding Republic of Ireland nationals living in GB, NI or ROI) are very welcome to apply. Such students will be required to fund their own fees at the Queen's University Belfast international fee rate (£17k or £21.4k per annum), but LINAS can provide both the maintenance award (approximately £15.3k per year) and a contribution to the fee of £4.5k per year.
Project priority themes are:
Artificial Intelligence, Social Justice and Public Decision-Making
- Algorithmic accountability: Public control of AI decision-making
- Legal and social-legal challenges around new technology
- AI and Humans: Reckoning and Judgement
- Fairness in Exploratory Data Analytics: The agency shift from humans to AI
Science, Governability and Society
- Robotic eyes on the sky and on earth
- ML-enhanced quantum information processing
- Numerically Robust and Reproducible Deep Learning
- Algorithmic security: Distributed Agency, Responsibility, and Governability
Projects
- Open proposal for applicant development
Thematic area: AI, Social Justice and Public Decision-making
Subject area: LINAS Themes
Project overview
As AI becomes more sophisticated, we will witness algorithms and machines making use of huge data sets, including personal data, in decisions relating to medicine and healthcare, law and government, finance, city planning and even within military arenas. The ethical, legal, political and sociological aspects of living with machines whose AI algorithms are allowed to operate independently require careful investigation. In parallel, the question of how we maintain reproducibility and accountability within the scientific process requires novel approaches as we face AI algorithms driving scientific discovery in unexpected directions. We must ensure that these scientific results are both reproducible and repeatable when the data sets employed are so immense that they cannot be independently published.
Applicants are invited to consider very closely the themes of the LINAS Doctoral Training Programme and offer a proposal for research that will investigate both the social justice dimensions and the implications for scientific method of an algorithmic future in which artificial intelligence (AI) impacts on a key area of science and society.
Additional Information
Applicants are advised to make early contact with one of the members of the LINAS team listed below in order to discuss and develop a proposal that fits within both themes of LINAS and adopts its interdisciplinary approach.
LINAS Team:
- John Morison: Professor of Jurisprudence. He has research interests in algorithmic government, autonomous decision-making systems and Smart Cities.
- Stephen Smartt FRS: Professor of Astrophysics. He works in automated surveys of the sky, applying algorithms and machine learning to digital images.
- Sandra Scott-Hayward is a Lecturer in Network Security with research interests in SDNFV security, predictive analytics for network security and performance optimized security implementation.
- Muiris MacCarthaigh is a Senior Lecturer in Politics and Public Administration and co-director of the CITI-GENS DTP. His interests include the increasing role of technology in all aspects of public sector governance.
- Hans Vandierendonck is a Reader in HPC, Director of Postgraduate Research in EEECS, and co-director of the SCIDM DTP. His research interests include systems design for high-performance analytics and the precision and accuracy of algorithms.
- Roger Woods is a Professor in Digital Systems, Research Director of Data Science in ECIT and the Principal Investigator for Kelvin-2 HPC. His research interests are in high performance computing systems, data analytics and wireless systems.
- Mike Bourne is a Reader in International Security Studies and Director of Graduate Studies in HAPP. His research interests relate to the intersections of technology and politics in security issues such as border control, armed violence and arms control, and surveillance.
- Deepak Padmanabhan is a Lecturer in EEECS and an expert in machine learning with interests in the ethics and fairness for data-driven artificial intelligence.
- Mauro Paternostro is Professor of Quantum Information Science. He works in the formulation of a framework for machine learning-enhanced quantum information processing.
- Teresa Degenhardt is a Lecturer in SSESW and Director of the ESRC linked Masters in Social Research. She works on criminological theory in the international sphere, technology development and border security, surveillance, conflict, migrant detention.
- Ciarán O’Kelly is a lecturer in the School of Law. His research is on accountability in public and private organisations. He is especially interested in how human rights instruments and frameworks are put to work in corporate social responsibility. He is co-chair on the permanent study group on Quality and Integrity of Governance in the European Group on Public Administration.
- Cathal McCall is a Professor in HAPP with research interests in Border Studies, cross-border cooperation, conflict and conflict transformation.
- Meg Schwamb is a Lecturer in Astrophysics. Her expertise is in big data for planetary astronomy, focusing in particular on applying citizen science/crowdsourcing to mine large astronomical and planetary science datasets.
Or a member of the Extended Supervisory Team:
Michelle Butler (SSESW), Ernst De Mooij (M&P), Hastings Donnan FBA (Mitchell Institute), Alessandro Ferraro (M&P), Alan Fitzsimmons (M&P), Chongyan Gu (EEECS), Jinguang Han (EEECS), David Jess (M&P), Anna Jurek-Loughrey (EEECS), Ayesha Khalid (EEECS), Debbie Lisle (HAPP), Mihalis Mathioudakis (M&P), Kieran McLaughlin (EEECS), Niall McLaughlin (EEECS), Paul Miller (EEECS), Ryan Milligan (M&P), Ciara Rafferty (EEECS), Karen Rafferty (EEECS), Austen Rainer (EEECS), Neil Robertson (EEECS), Sakir Sezer (EEECS), Stuart Sim (M&P), John Topping (SSESW), Gareth Tribello (M&P), Chris Watson (M&P), and David Wilkins (M&P).
- Machine and algorithm driven discovery in big data: Implications for scientific reproducibility
First Supervisor: Stephen Smartt, School of Mathematics and Physics
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic area: Science, Governability and Society
Subject area: Machine learning, algorithms and applications; Political Science, Public Administration
Project overview
Science has entered an era in which we can survey the whole sky across many wavelengths simultaneously using ground-based instruments and space-based satellites. In the past many discoveries of new astrophysical phenomena were made with single experiments. We are now witnessing discoveries being made by combining data, in real time, using AI and machine learning algorithms. This project will develop new algorithms and apply them to new data from all-sky projects that the Astrophysics Research Centre has unique access to. The project will involve searching for new and unrecognised patterns in multi-dimensional data through automated algorithms that are based on machine learning techniques.
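Purely as an illustrative sketch of the kind of technique involved (synthetic data and a generic scikit-learn detector, assumed here for illustration rather than drawn from the project itself), unsupervised anomaly detection over multi-dimensional survey measurements might look like this:

```python
# Minimal sketch: unsupervised anomaly detection in multi-dimensional
# survey data. Illustrative only -- the project would develop far more
# sophisticated, survey-specific algorithms.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic stand-in for survey measurements: 10,000 sources, each with
# 5 features (e.g. brightness, colour indices, variability statistics).
ordinary = rng.normal(0.0, 1.0, size=(10_000, 5))
# A handful of injected "new phenomena" far from the bulk population.
unusual = rng.normal(6.0, 0.5, size=(10, 5))
X = np.vstack([ordinary, unusual])

# Isolation Forest flags points that are easy to isolate, i.e. unlike
# the bulk of the data, without needing any labelled examples.
detector = IsolationForest(contamination=0.002, random_state=0).fit(X)
scores = detector.score_samples(X)  # lower score = more anomalous

# Rank candidates for human follow-up: the injected sources should
# dominate the top of this list.
top = np.argsort(scores)[:10]
print("Most anomalous source indices:", top)
```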
The project is interdisciplinary in nature and will also address questions about artificially-derived knowledge and its public governance by examining the question of how to publish and publicly release both discoveries and code versions of this work. Indicative questions include: How do scientists reproduce a machine’s discovery? How are large multi-dimensional data sets published and released to the world to facilitate reproducibility? How far can we train algorithms to make discoveries that scientists would have done in the past, and who gets the credit for the discovery? If computer code alerts a scientist to a new phenomenon, who then made the discovery – the owner of the hardware, the code writer, or the handle turner?
This is available as a 4-year funded PhD in which a significant publication will be prepared on the ethical aspects of machine learning in scientific discovery. A 3-year PhD is also available in which the ethical aspects are expected to form a chapter of the completed PhD thesis.
A degree in mathematics, physics, software engineering or computer science is required. Previous astrophysical experience and undergraduate modules are not required, and some academic engagement with social science modules would be an advantage. The student will be trained in the ethical impacts of using machines and algorithms to make scientific discoveries and in how accountable and reproducible these algorithms should be.
Additional Information
The primary supervisor is engaged in two NASA-funded sky surveys with telescopes in Hawaii – the Pan-STARRS surveys and the ATLAS project. His Institute is a lead partner in the UK’s effort to build software infrastructure to exploit data from the Rubin Observatory’s survey, which will start producing data in 2023. The Institute is also a major partner in several projects constructing the next generation of very large spectroscopic surveys at the world-leading European Southern Observatory. We combine experimental data and physical models of exploding stars in high-dimensional data cubes. Both of these data-rich projects require a combination of data analysis and physical modelling through computer algorithms, machine learning and AI-based tools. The successful candidate will also be welcome to engage with projects concerning large-scale public policy data analysis in respect of COVID-19 with which the second supervisor is involved.
- The use of Crowd-sourcing to detect human rights violations and criminal events: complex labelling processes enabled by new technological development.
First Supervisor: Teresa Degenhardt, School of Social Sciences, Education and Social Work
Second Supervisor: Meg Schwamb, School of Mathematics and Physics
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Interdisciplinary
Project overview
Crowdsourcing encompasses a wide range of practices related to very different tasks and actors (Estellés-Arolas and González-Ladrón-de-Guevara, 2012: 197). It enables the enrolment of volunteers to undertake specific tasks. Increasingly, prominent international NGOs such as Amnesty International use crowdsourcing to verify information from social media on human rights abuses, extrajudicial killings and war crimes worldwide, but crowdsourcing is also used by police forces in conjunction with private actors. This project aims to investigate how data is assembled by different individuals in collaboration with algorithms, using case studies on Zooniverse.
Technologies for assessing location, sounds, images and their veracity are now important tools in the fight against human rights violations, in criminal investigations, and in a number of other settings. Daily snapshots of information from satellites, YouTube, Twitter, Facebook and Instagram can be assessed by private actors and individuals with the aim of detecting human rights violations and criminal actions.
These new sources raise important questions around the labelling, by individuals and machines, of events that may amount to serious crimes and form the basis for prosecution.
Main Research Questions:
How does a crowd source classification system operate?
How do scientists decide on the production of the algorithm and its training for the task?
How do people contribute to the process?
How do these different actors operate for the coding?
What sort of ‘knowledge’ do they use for the process?
How are different sources of assessment combined in the classification process by the algorithm?
Methodology
The student will conduct a case study to understand how scientists operate during the development stage of these new mechanisms.
To provide context for the main themes of the PhD research programme, the student will also engage in a computational component providing hands-on experience of interpreting crowd-sourced classifications in order to efficiently and effectively sift through large datasets. Built upon the Zooniverse project builder platform (http://www.zooniverse.org), the Planet Four projects (Planet Four, Planet Four: Terrains, and Planet Four: Ridges; http://www.planetfour.space) are three online citizen science projects. The Planet Four projects engage approximately 150,000 volunteers worldwide in interpreting imagery from spacecraft in orbit around Mars. Planet Four is creating an unprecedented wind map of the southern pole of Mars, Planet Four: Terrains is studying the distribution of carbon dioxide jets at the Martian south pole, and Planet Four: Ridges is exploring the distribution of polygonal ridges across the Martian mid-latitudes. The student will have the opportunity to understand how the process works and to interview some of the people involved.
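As a toy illustration of how many volunteers' classifications of the same image can be combined into a consensus label (hypothetical image identifiers, labels and threshold; real Zooniverse pipelines use more careful statistical aggregation):

```python
# Toy sketch: aggregating crowd-sourced classifications by majority
# vote. Real citizen-science pipelines use more sophisticated
# statistical aggregation, but the basic idea is the same.
from collections import Counter

# Hypothetical volunteer classifications, keyed by image identifier.
classifications = {
    "image_001": ["fan", "fan", "blotch", "fan"],
    "image_002": ["blotch", "blotch", "fan", "blotch", "blotch"],
    "image_003": ["fan", "blotch"],  # no clear consensus
}

def consensus(labels, threshold=0.6):
    """Return the majority label if enough volunteers agree,
    otherwise flag the image for expert review."""
    label, count = Counter(labels).most_common(1)[0]
    if count / len(labels) >= threshold:
        return label
    return "needs-expert-review"

for image_id, labels in classifications.items():
    print(image_id, "->", consensus(labels))
```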
Ethics: a process of ethical approval will be followed in line with the requirements of the school in which the project is based.
This is available as a 4-year funded PhD or 3-year PhD thesis, depending on the level of interdisciplinarity covered by the project/student.
Additional Information
Through proposed partnerships, the student may be able to develop a fully social-science-based proposal on the topic above, to determine how different actors, such as NGOs or police departments, may train their citizens/researchers and what issues and concerns they encounter in the process. The findings will then be fed back to scientists involved in the development of crowdsourcing classification.
- Holding Government Algorithms and AI to Account
First Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Public Administration, Computer Science
Project overview
Rapid and uncoordinated growth in the use of AI and algorithms for government decision-making is a topic of increased concern and interest for policy practitioners and scholars alike. For example, the use of algorithms to make decisions concerning educational grade awards in the context of COVID-19 has come in for much recent attention and criticism. However, concerns over the use of AI for predictive policing and judicial decision-making are not new, with particular emphasis on the issues of bias and ethics. Although ‘algorithmic governance’ is a relatively new concept to capture these developments, the practice of using algorithms and AI speaks to classical themes of democratic accountability and transparency in government decision-making (Olsen 2017), but demands that they be addressed in new ways.
This PhD project seeks to build on recent work by Busuioc (2021) and Yeung and Lodge (2019), which starts to unpack how governments might best develop ways and means of addressing the complex issue of accountability for machine-based decision-making in the public sector. To achieve this, the project proposes an interdisciplinary approach that combines the primary supervisor’s expertise in academic public administration with the computer science expertise of the secondary supervisor. It opens the opportunity to examine the use of algorithms and artificial intelligence in a variety of public policy domains, and in respect of different challenges such as regulation and safeguards in algorithmic decision-making; balancing outcome legitimacy with oversight of machine-based policy inputs; and the role of bureaucratic discretion and expertise in this new era of AI-based decision-making.
Proposals are welcome that address these and/or associated challenges.
References
Busuioc, M. (2021) ‘Accountable Artificial Intelligence: Holding Algorithms to Account’ Public Administration Review
Olsen, J.P. (2017) Democratic Accountability, Political Order, and Change (OUP)
Yeung, K. and Lodge, M. (eds) (2019) Algorithmic Regulation (OUP)
- Can business and human rights frameworks shape accountability for AI in information ecosystems?
First Supervisor: Ciarán O’Kelly, School of Law
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Algorithmic accountability
Project overview
This project is focused on how human rights frameworks might govern emerging innovations in text and other media generation algorithms. Algorithmic artefacts (produced for instance by deep learning algorithms like GPT-3 [2] or by deepfakes) are approaching a point where automatically generated content can pass as human-made, with potential for use in applications from journalism to business reporting to literature and visual media. Beyond that, editorial decisions might be directed in part with reference to social media algorithms.
Such language models are conceptually straightforward at heart: fed with text, the algorithm produces more of the same type. The potential for abuse is also apparent, however. Indeed, a previous generation of the model (GPT-2) was not fully released for public use because it was deemed ‘too dangerous,’ not least in its potential for manipulating already fragile information ecosystems across political and social spheres [3].
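To make the ‘fed with text, produces more of the same’ intuition concrete, here is a deliberately trivial sketch: a bigram Markov chain, orders of magnitude simpler than GPT-3, but resting on the same generative principle of sampling the next token given what came before:

```python
# Trivial sketch of the generative principle behind language models:
# learn which token follows which, then sample. GPT-3 does this with
# billions of parameters and long contexts; the principle is the same.
import random
from collections import defaultdict

corpus = ("the committee met to discuss the report and "
          "the committee agreed to publish the report").split()

# Count next-token possibilities for each token (a bigram model).
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

# Generate new text by repeatedly sampling a plausible next token.
random.seed(1)
token = "the"
output = [token]
for _ in range(10):
    token = random.choice(successors.get(token, corpus))
    output.append(token)
print(" ".join(output))
```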
In such circumstances it may be that approaching AI from an ‘ethics’ standpoint, or treating accountability as a kind of transparency (so, against ‘black box’ algorithms) does not go far enough. Instead, recognising that AI agents in information ecosystems present human rights risks, governance frameworks are required that rely on and negotiate regulatory and other accountability relationships between states, private actors, and others.
This project engages with such innovations and their risks by asking whether states and AI innovators might negotiate their relationships through global business and human rights frameworks [7, 8].
The multiplicity of expectations [1, 6] that characterises accountability relationships between social actors, articulated in business and human rights through a ‘three pillars’ framework (state duties to protect; business responsibilities to respect; and access to remedies for breaches), may offer more concrete paths towards social governance of AI than ‘ethics’ frames do [4, 5]. Conversely, such frameworks may privilege expert and/or technocratic solutions to the detriment of democratic engagement with technology’s impact on society.
The project will develop an understanding of how accountability relationships can form in or around algorithmic text and other media generation, specifically when it comes to their emerging roles in information ecosystems. The project will ask how social relationships are mediated and possibly reconfigured in a social environment where information is algorithmically generated to an increasing extent.
Do business and human rights frameworks offer a path beyond the technocratic, especially in determining the scope and purposes of algorithmic pipelines and information regimes? Can they help shape innovations around not only how algorithms are put to work but also around how information itself is generated and shaped, especially in a ‘post-truth’ environment? Can accountability ever be ‘computable’ in this arena, and to what degree must innovation be negotiated within the context of relationships between public and private actors, including in avoiding causing or contributing to human rights impacts?
This project is available on a 3- or 4-year basis. The 4-year project would include an expectation that a student with a social science or humanities background enhance their technical competencies in the course of the project, and that all students complete a paper for publication in a peer-reviewed journal.
Support
The student would join a vibrant PhD community in the School of Law and within that would join with existing students and colleagues in the business and human rights group within the School. Relevant colleagues include Dr. O’Kelly, Dr. Ciara Hackett and Dr. Luke Moffett. A number of PhD researchers are already engaged in considering for instance “acknowledgement and denial in corporate reporting” and “market-driven stewardship in corporate governance”.
References
[1] Bovens M, The Quest for Responsibility: Accountability and Citizenship in Complex Organisations (Cambridge University Press 1998).
[2] Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, Neelakantan A, and others, ‘Language Models Are Few-Shot Learners’ [2020] arXiv:200514165 [cs].
[3] Just N and Latzer M, ‘Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet’ (2017) 39 Media, Culture & Society 238.
[4] Kriebitz A and Lütge C, ‘Artificial Intelligence and Human Rights: A Business Ethical Assessment’ (2020) 5 Business and Human Rights Journal 84.
[5] Mantelero A, ‘AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment’ (2018) 34 Computer Law & Security Review 754.
[6] Romzek B and Dubnick MJ, ‘Accountability in the Public Sector: Lessons from the Challenger Tragedy’ (1987) 47 Public Administration Review 227.
[7] Organisation for Economic Co-operation and Development, ‘OECD Guidelines for Multinational Enterprises’ (OECD, 2011).
[8] UN Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework (United Nations 2011).
- Fair Exploratory AI
First Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Computer Science, Politics, Public Administration
Project overview
Governments and the public sector are heralding transformations in data collection. One of the most important transformations is the shift in focus from active data collection, where citizens fill out forms, to passive data collection whereby data is collected through sensors and surveillance devices. Passive data is increasingly acquired through Internet of Things sensors in smart city infrastructure, WiFi, website and app usage tracking, responses to behavioural experiments used in public services (e.g., footfall patterns in response to changes in public library opening times), and a variety of other devices such as healthcare wearables, surveillance cameras and traffic sensors.
Such passive data are often ‘unlabelled’, i.e., not manually annotated with their status for a task of interest. For example, surveillance camera footage does not come with labellings of ‘suspicious patterns’. This makes exploratory data analytics the natural way to analyse such data. These techniques include segmentation (i.e., grouping into clusters, each represented by a prototypical data point) and anomaly detection (e.g., identifying which citizens are suspicious enough to be subjected to stop-and-frisk checks), amongst others. While Fair AI for analytics over labelled data has seen great advances, Fair Exploratory AI is in its infancy. Under this umbrella, we will examine both: (i) identifying normative principles from the literature on fairness and justice, based on their applicability to regulating exploratory AI, and translating them into regulatory frameworks; and (ii) embedding fairness and related values into the design of algorithms so that their impacts are demonstrably and quantifiably adherent to such values. PhDs in this area will explore issues at the intersection of political science and AI.
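As a minimal, hypothetical illustration of what ‘quantifiably adherent’ could mean for exploratory analytics (synthetic data; an after-the-fact audit rather than any method proposed by the project), one can cluster unlabelled data and then check how evenly a protected attribute is represented in each cluster:

```python
# Minimal sketch: auditing a clustering (exploratory, unlabelled
# analytics) for group balance. A fair-by-design algorithm would
# build such a constraint into the clustering itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 4))         # unlabelled passive data
group = rng.integers(0, 2, size=1_000)  # protected attribute (0/1)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

overall = group.mean()                  # population share of group 1
for k in range(5):
    share = group[labels == k].mean()
    print(f"cluster {k}: group-1 share {share:.2f} "
          f"(population {overall:.2f})")
```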
Proposals are welcome that address these and/or associated challenges.
- Advancing Algorithmic Transparency in a Data Driven World
First Supervisor: Dr Blesson Varghese, School of Electronics, Electrical Engineering and Computer Science and ECIT
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic area: Science, Governability and Society
Subject area: Computer Science, Political Science
Project overview
Forbes predicts that by 2025 over 80 billion devices, such as smartphones and wearables, will be connected to the Internet. Consequently, 180 trillion gigabytes of data will be generated that will be considered sensitive. Mass data scandals, in which user data has been harvested and used unethically, have been rife in the last decade, creating an era of ‘data xenophobia’. Therefore, processing data and making decisions based on such data is not only technically challenging but also raises important and fundamental social and ethical questions.
In this context, the focus of this project is the concept and practice of transparency and how it applies to the classic underlying algorithms (such as those involving machine learning) that are used to process data. The project will therefore explore two intertwined themes that sit at the intersection of computer science and society:
Beyond transparency: Existing real-world practices provide a veneer of transparency. They allow a user to know who has access to their personal data, while assuming that the algorithms which processed the data and influenced subsequent decisions on the user's behalf are transparent. In addition, there is usually no consideration given to where intermediate data, generated while processing user data and potentially also sensitive, is stored. This project challenges the norm by asking fundamental questions such as: How can a user track their data and erase dependencies on their data? How can a user choose where their data can and cannot be processed? How can a user negotiate subsequent choices arising from the processing of their data?
Beyond classic algorithms: Insight from data has been derived for more than three decades by using machine learning algorithms, such as neural networks. These depend on ‘black-box’ models that routinely raise concerns about transparency. The fundamental issue is that such machine learning algorithms are inherently not transparent, so transparency can only be considered as an afterthought. This project aims to use radically different ways of deriving insight from data, such as the Tsetlin machine, a novel alternative to classic machine learning that relies on propositional logic and is inherently transparent.
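A toy sketch of why clause-based, propositional models are inherently readable (the hand-written clauses and feature names below are invented for illustration and only mimic the inference step; an actual Tsetlin machine learns its clauses via teams of learning automata):

```python
# Toy sketch: clause-based classification over Boolean features.
# Each clause is a human-readable conjunction; the prediction is a
# vote over clauses, so the reasons for a decision can be read off
# directly -- the property that motivates Tsetlin-style models.
features = {"high_traffic": True, "night_time": False,
            "known_device": True}

# Hand-written clauses (a Tsetlin machine would learn these).
positive_clauses = [
    lambda f: f["high_traffic"] and not f["known_device"],
    lambda f: f["night_time"] and f["high_traffic"],
]
negative_clauses = [
    lambda f: f["known_device"] and not f["night_time"],
]

# Prediction = positive clause votes minus negative clause votes.
votes = (sum(c(features) for c in positive_clauses)
         - sum(c(features) for c in negative_clauses))
print("flagged" if votes > 0 else "not flagged", f"(vote total {votes})")
```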
The questions posed above will be articulated and developed in the context of edge computing, a paradigm that seeks to process data near the user instead of on the cloud.
Subject to review by the LINAS management committee, this project may be funded as a 4-year PhD in which a significant publication will be prepared. Otherwise, it will be funded as a 3-year PhD.
Additional Information
This project directly aligns with the Royal Society Industry Fellowship of Dr Varghese hosted by British Telecommunications plc (BT) on trust and accountability in edge computing. The project also aligns with the aims of the newly established Edge Computing Hub at the University funded by Rakuten Mobile, Japan. Additional partnerships will be developed by the supervisors during the course of the PhD project.
The PhD student will have opportunities for internships at both BT and Rakuten providing opportunities to interact with multiple teams within these organisations and explore the outcomes of the project on realistic testbeds.
- Security and Responsibility: Emergence, decision, and distributed agency.
First Supervisor: Mike Bourne, School of History, Anthropology, Philosophy and Politics
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: Science, Governability and Society
Subject area: Algorithmic security: Distributed Agency, Responsibility, and Governability
Project overview
This PhD will explore questions of responsibility amid the ‘mess’ of security politics and practice. As security action is increasingly automated, it can be seen as emergent in assemblages of diverse actants. What happens to questions of responsibility when security decisions (to kill, to exclude, to protect, to identify threat) are emergent properties of assemblages?
This project will seek to explore the challenge of responsibility beyond legal accountability. Rather than seeking to solve this challenge through the political engineering of legal accountability for error, it will engage the deeper encoding of power relations in multiple dimensions. These may include the corporeal, spatial, temporal, racial, or sensory/aesthetic reconfigurations of security among the machines. With a focus on one (or more) of these dimensions, proposals are invited to engage in ‘empirical theorising’ through a relevant case that may include: Autonomous Weapons Systems; Smart cities and sensor infrastructures; border control; or others. Proposals should set out the theoretical and empirical orientation to be adopted to engage the ‘cognitive assemblages’ of security action. They should engage the emergence of responsibility concerns in these assemblages across the practice of data-driven AI from development to implementation. How is responsibility understood and worked with in each stage? How are the dilemmas settled, deferred, or distributed?
- When does one become dangerous? Exploring the use of algorithms and machine learning in the process of detection of dangerousness on the basis of previous data.
First Supervisor: Teresa Degenhardt, School of Social Sciences, Education and Social Work
Second Supervisor: Prof. Hans Vandierendonck, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Interdisciplinary
Project overview
This project aims to consider how the element of uncertainty at the mathematical level that is used to condense data may increase and amplify the biases of the data upon which the algorithm is called to decide and reach a conclusion on the dangerousness of an individual. It aims to expose and make sense of a selection of an algorithm's decisions by looking at what factors it has taken into consideration. Fundamentally, it seeks to contribute to considerations of a model of accountability for algorithmic decisions.
Machine learning is increasingly applied in decision-making situations in industry and government. These technologies are relatively immature and are susceptible to a number of problems whose consequences for society must be understood, e.g., the quality of the data on which the model is based, the mathematical model itself, and deployment issues that may affect decision-making. For this reason, it is important to explore the logics inscribed within different technologies as a way of controlling these powerful new tools. The increased collection of private data and the profiling of individuals according to specific propositions, with the aim of predicting the future, are even more troubling when such profiling leads to the social sorting of people on the basis of their past education or indeed deviant behaviour. When does one become a dangerous individual?
Algorithms may draw on data such as age, gender, marital status, location, facial recognition, movement tracing, history of substance abuse, history of internet use, school attendance, criminal record or area of residence to predict who is likely to be involved in criminal activity, or who may be considered a suspected terrorist. As data accumulate over time, assessments change. We are interested in looking at how to devise a system by which an algorithm may ‘provide an account’ of itself to public scrutiny, so that the factors that have affected the decision-making may be considered for further scrutiny. Is it possible to devise such a mechanism in a machine, so that the data related to a decision can be subjected to evaluation by human subjects, and thus disregarded in cases of bias or faulty judgement? This project is interdisciplinary in nature, involving a scientific component linked to improving the explainability of algorithms, and a sociological one looking at how the use of machine learning impacts on the process of selecting the dangerous individual.
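One minimal sketch of what such an ‘account’ of a single decision could look like (synthetic data and illustrative feature names only; a linear model is used here precisely because its per-decision factor contributions can be read off directly, unlike more opaque models):

```python
# Minimal sketch: a per-decision "account" from a linear risk model.
# Synthetic data only; each factor's signed contribution to this one
# decision is exposed for human scrutiny and possible challenge.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
feature_names = ["age", "prior_offences", "school_attendance", "area_code"]
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one individual, expose each factor's signed contribution to the
# decision, largest first, so it can be scrutinised and challenged.
person = X[0]
contributions = model.coef_[0] * person
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {c:+.3f}")
print("predicted risk:", model.predict_proba([person])[0, 1].round(3))
```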
Research Questions
How does the imprecision necessarily ingrained in an algorithmic structure contribute to unjust decisions or to just ones?
How are different data involved in the selection of attributes to be scrutinised during the process of training the machine? Is it possible to improve the structure of the learning machine? Is it possible to account for the context from which a specific data point comes, and to discount some data?
How does algorithmic selection change the process of discretion ingrained in policing? How can we make sense of such a multiple involvement of data, agents and causes in the selection of attributes responsible for decisions on the dangerousness of an individual?
Methodology
Empirical observation and analysis of a selected sample of attributes selected by machine-learned algorithms; interviews with data scientists, machine learning experts and key stakeholders on the inaccuracies and associated uncertainties in the decision-making process; analysis of the methods and techniques used to maximise accuracy in decision-making.
Ethical considerations will be attended to according to the main school of reference.
This is available as a 4-year funded PhD or 3-year PhD thesis, depending on the level of interdisciplinarity covered by the project/student.
- AI and Humans Versus a Microbe
First Supervisor: Cathal McCall, School of History, Anthropology, Philosophy and Politics
Second Supervisor: Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Project overview
In The Promise of Artificial Intelligence: Reckoning and Judgment (MIT Press, 2019), Brian Cantwell Smith questions the intelligence of Artificial Intelligence (AI). He contrasts the spectacular reckoning ability of AI with the expansive human capacity for judgement based on ethical, economic, political, and social concerns.
Political responses to the COVID-19 pandemic provide a rich empirical landscape in which to observe the interplay between AI reckoning and human judgement. AI is embedded in the scientific effort to combat COVID-19. Its reckoning abilities, through data processing and pattern recognition, are of great value. However, the human judgement of political leaders is crucial for truly intelligent responses to the real human world devastation caused by a microbe. Political responses to the pandemic may well be based on ‘the science’ - problematised somewhat by differing scientific opinions on that response - but they also require ethical, economic, social and, ultimately, political judgements to address the myriad of complex and difficult problems caused.
The project seeks to interrogate the AI reckoning and human judgement nexus. In light of the rich empirical landscape provided by political responses to the COVID-19 pandemic, the project may be territorially or sectorally focused. For example, territorially, it may focus upon the divergent responses to COVID-19 on the divided island of Ireland or, sectorally, it may analyse the interplay between reckoning and judging in deciding how and when to return to the workplace, the school, or the university.
- Secure Environments, Legally Compliant Environments, Safe Environments: Understanding the dynamics for confidence with algorithmic decision-making
First Supervisor: John Morison, School of Law
Second Supervisor: Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: Governance, secure computing
Project overview
The way in which decisions are made across a whole range of applied contexts in wider society is changing rapidly and radically. Algorithmic decision-making is becoming familiar as techniques around big data, cloud storage, data mining, pattern recognition, machine learning, datafication, dataveillance, and personalisation allow massive data sets to be gathered in a volume, velocity and variety that makes conventional forms of analysis impossible, and new algorithmic forms possible. The Internet of Things, as enabled by 5G, gathers information, in real time and continuously, from a huge range of everyday objects and activities to provide a further impetus for ever-increasing volumes of data to be used within a range of decision-making systems that can regulate activities and manage behaviour in a whole variety of social activities. New machine learning techniques involving sophisticated algorithms developed in a bottom-up process are being introduced to interrogate this data and allow bulk data ingestion into processes where ever more connections and relations are being mapped and manipulated, sorted and filtered, and searched and prioritised.
All of this directs and supports a de-personalised form of decision-making. This is causing some consternation. This is manifested differently across various disciplines. Within some aspects of computer science and engineering the problem is about ensuring that decision-making environments are secure against error, malicious attack and criminal activity. For lawyers, the problems are around compliance with various regulatory regimes for data protection, privacy, transparency and human rights. More widely across the social sciences and humanities, problems appear around equality and discrimination, fake data and nudging and political manipulation, and the wider social consequences of removing both the personal and political from the way in which decisions are taken in both private and commercial contexts and by public authorities.
This project proposes to take three case studies - in areas relating to social media, commerce and government decision-making - to explore in depth what ‘secure’, ‘legally compliant’ and ‘safe’ actually mean within each example, how these ideas are related to one another, and how confidence in machine-led decision-making might be developed.
This project could be undertaken by someone with a law or social science background or a computer science approach – combined with a willingness to explore the other discipline and its concerns.
This project may be completed within three or four years, depending on the applicant’s experience. If a three-year project is preferred, it may be possible to reduce the number or range of case studies to be developed.
- Machine learning-enhanced diagnostics of open quantum networks
First Supervisor: Alessandro Ferraro, School of Maths and Physics
Second Supervisor: John McAllister, School of Electronics, Electrical Engineering and Computer Science
Third Supervisor: Mike Bourne, School of History, Anthropology, Philosophy and Politics
Thematic area: Science, Governability and Society
Subject area: Physics, Quantum Information
Project overview
The tracking of open quantum dynamics – i.e. the evolution resulting from the interaction between a system and its surrounding world – is a key challenge for the development of quantum technologies. On one hand, no quantum system can be considered as completely isolated from the environment around it. This makes open quantum dynamics the best suited theoretical framework for the interpretation of experimental data and the formulation of predictions. On the other hand, it is widely accepted that the interaction with an environment depletes the non-classical properties of a system, driving it towards a mundane classical description. The characterisation of how this process is driven is of paramount importance for the design of strategies aimed at counteracting such a quantum-to-classical transition, and thus to deliver more robust quantum technologies. Moreover, it is key for the understanding of what sets the quantum and classical world apart, and thus the harnessing of any potential technological advantage provided by quantum features.
However, such characterisation is also very difficult. It requires an accurate knowledge of how many different parts or elements of an environment are coupled to the system of interest, and how strongly. Such information is aptly contained in the so-called environmental spectral density (ESD), whose knowledge is crucial for the prediction of the temporal behaviour of open quantum systems. Full reconstruction of the ESD is typically prevented by the fundamental inaccessibility of environmental systems, in particular in condensed matter and solid-state platforms. In fact, it is often the case that ESDs are “conjectured” on the basis of reasonable expectations and assumptions. Yet, even small deviations of the conjectured form of ESD from its actual behaviour could lead to significant discrepancies between the predicted and actual dynamics of the open quantum system.
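For orientation, one common convention from the open-quantum-systems literature (an illustrative choice; conventions differ by numerical factors) writes the ESD of a system coupled with strengths g_k to environmental modes of frequency ω_k as:

```latex
% Standard discrete-mode form of the environmental spectral density.
\[
  J(\omega) = \sum_k |g_k|^2 \, \delta(\omega - \omega_k)
\]
% For a dense environment this is smoothed into a continuous function
% of \omega, whose shape governs the open-system dynamics.
```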
The scope of this project is to design novel approaches to the diagnosis of the ESD in open quantum systems. The methodological step-change entailed by this proposal lies in the combination of machine learning techniques and sophisticated theoretical methods for the analysis of open quantum dynamics. First, we will make use of the so-called reaction coordinate framework to single out the relevant part of the environment coupled to a given quantum system. This will be instrumental to an accurate description of the dynamics of the system, and thus to the provision of key information for the application of a supervised learning method aimed at identifying the best-suited ESD compatible with the observed dynamics.
The student developing this project will acquire relevant knowledge of key techniques, from open quantum dynamics to machine learning, and from quantum optics to information theory. Such hybrid expertise will provide them with a unique toolbox that can be deployed towards the theoretical characterisation of real-world quantum devices.
The characterisation of the quantum-to-classical transition is believed to depend on the quality of the information that the elements of the environment acquire over the state of the system itself. When different elements of the environment agree on such information, the state of the system emerges as an element of “objective reality” and becomes, as such, a classical entity. The research to be developed in this project will thus offer a chance to address how information gathered by a group of observers allows for the emergence of objective reality in a broad context that could well extend to the dynamics within (relational and societal) networks. The availability of new methods for the characterisation and the control of open quantum dynamics will impact substantively on the development of noise-resilient quantum technologies, including devices for secure quantum-enhanced communication.
Application Process
Within this programme, PhD applicants will work with named researchers across the relevant disciplines on a proposal that fits within the overall programme. Applicants should have a good honours degree (normally at least a 2.1 or equivalent in a relevant subject - which will include physics, maths, computer science as well as law, social science and the humanities), preferably also a Masters qualification, and a strong commitment to interdisciplinary approaches.
Outlines of the Research Proposals available for this year are posted above, including details of the LINAS team. Applicants are strongly advised to make contact with their potential supervisor to develop their ideas about how they would contribute to the proposed project. The application should detail the particular contribution that the potential doctoral scholar would make to the proposed project, how this would fit into the wider themes of the programme and whether the application is for a three or four year award.
A link to the Virtual Information Session posted above gives full details of the LINAS programme. To discuss further, please contact us at linas@qub.ac.uk.
All applications should be made through the online Application Portal (below) and directed to the School where the Primary Supervisor is based. They must also be clearly marked as LINAS 21 to ensure consideration for funding. Applicants should choose the option in the funding section “I wish to be considered for external funding” and then enter LINAS21 in the free text box which follows.
Closing date for applications: 5 March 2021 at 4.00 pm
Interviews: Week commencing 15 March 2021
Decision to applicants: Week commencing 29 March 2021
Completing your Application
- All applicants must provide an up-to-date CV; this should be uploaded to the Admissions Portal as a separate document*
- All applicants are required to provide a 2000 word (maximum) statement, specifically detailing how they would contribute to the proposed project and the interdisciplinary aspects of the LINAS programme
- Applicants wishing to propose an interdisciplinary PhD topic of their own, that aligns with one or more of the LINAS priority themes, must upload a 2000 word research proposal that describes the topic as a separate document.* This research proposal must clearly identify a potential supervisory team and which of the themes it relates to
- Please note, failure to include the reference LINAS21 in the free text box may result in your application not being allocated or considered for funding.
*Please note that only one document can be uploaded. You must combine your CV and Research Proposal into one document (word or PDF).
LINAS Academic Team:
Programme leadership: LINAS Programme Lead and PI: Prof John Morison
AI, Social Justice and Public Decision-Making coordination: Dr Sandra Scott-Hayward
Science, Governability and Society coordination: Prof Stephen Smartt
Full LINAS Team
Core Team:
Professor John Morison (Law) j.morison@qub.ac.uk
Professor Stephen Smartt (M & P) s.smartt@qub.ac.uk
Dr Sandra Scott-Hayward (CSIT) s.scott-hayward@qub.ac.uk
Dr Muiris MacCarthaigh (HAPP) m.maccarthaigh@qub.ac.uk
Dr Hans Vandierendonck (EEECS) h.vandierendonck@qub.ac.uk
Professor Roger Woods (ECIT) r.woods@qub.ac.uk
Dr Mike Bourne (HAPP) m.bourne@qub.ac.uk
Dr Deepak Padmanabhan (EEECS) d.padmanabhan@qub.ac.uk
Professor Mauro Paternostro (M & P) m.paternostro@qub.ac.uk
Dr Teresa Degenhardt (SSESW) t.degenhardt@qub.ac.uk
Professor Cathal McCall (HAPP) c.mccall@qub.ac.uk
Dr Meg Schwamb (M & P) m.schwamb@qub.ac.uk
Dr Ciarán O'Kelly (Law) c.okelly@qub.ac.uk
Extended affiliated team:
Dr Michelle Butler (SSESW) michelle.butler@qub.ac.uk
Dr Ernst De Mooij (M&P) e.demooij@qub.ac.uk
Prof Hastings Donnan (MI) h.donnan@qub.ac.uk
Dr Alessandro Ferraro (M&P) a.ferraro@qub.ac.uk
Prof Alan Fitzsimmons (M&P) a.fitzsimmons@qub.ac.uk
Dr Chongyan Gu (EEECS) c.gu@qub.ac.uk
Dr David Jess (M&P) d.jess@qub.ac.uk
Dr Anna Jurek-Loughrey (EEECS) a.jurek@qub.ac.uk
Dr Ayesha Khalid (EEECS) a.khalid@qub.ac.uk
Prof Debbie Lisle (HAPP) d.lisle@qub.ac.uk
Prof Mihalis Mathioudakis (M&P) m.mathioudakis@qub.ac.uk
Dr Kieran McLaughlin (EEECS) kieran.mclaughlin@qub.ac.uk
Dr Niall McLaughlin (EEECS) n.mclaughlin@qub.ac.uk
Dr Paul Miller (EEECS) p.miller@qub.ac.uk
Dr Ryan Milligan (M&P) r.milligan@qub.ac.uk
Dr Ciara Rafferty (EEECS) c.m.rafferty@qub.ac.uk
Prof Karen Rafferty (EEECS) k.rafferty@qub.ac.uk
Prof Austen Rainer (EEECS) a.rainer@qub.ac.uk
Prof Neil Robertson (EEECS) n.robertson@qub.ac.uk
Prof Sakir Sezer (EEECS) s.sezer@qub.ac.uk
Dr Stuart Sim (M&P) s.sim@qub.ac.uk
Dr John Topping (SSESW) j.topping@qub.ac.uk
Dr Gareth Tribello (M&P) g.tribello@qub.ac.uk
Prof Chris Watson (M&P) c.a.watson@qub.ac.uk
Dr David Wilkins (M&P) d.wilkins@qub.ac.uk