Smart Home, Smart Decisions: Establishing accountability in algorithmic security systems
Today’s home hosts a myriad of smart devices that connect to the Internet and to each other, providing a convenient, always-connected environment with access to entertainment, shopping, and social interaction. However, we now find ourselves home-schooling, receiving digital healthcare, and working from home, such that the home has become part of critical national infrastructure. It is therefore critical not only to secure this resource but also to consider the direct, real, and substantive consequences of security solutions that leverage emerging technologies.
Homeowners generally rely on their Internet Service Provider (ISP) to maintain the availability of the network and to offer a basic level of network security. Network intrusion detection and prevention systems (NIDS/NIPS) are a key tool in the network operator’s defence strategy. Emerging technologies such as Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and Multi-access Edge Computing (MEC), combined with advances in Machine Learning (ML) techniques, have enabled innovation in network security. For example, in the home environment, device fingerprinting can be combined with micro-segmentation to apply security policies that depend on the level of trust in the connecting device and/or a NIDS classification of the device as malicious. This can all be supported at the edge of the network, in the home network router/hub.
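To make the combination of fingerprinting, micro-segmentation, and NIDS classification concrete, the following is a minimal sketch of how a home router/hub might map a device's trust level and NIDS verdict to a micro-segment. All names, segments, and the policy itself are hypothetical illustrations, not part of the project.

```python
# Hypothetical sketch: mapping device trust and an ML-NIDS verdict to a
# micro-segment (security policy) at the home router/hub.
from dataclasses import dataclass
from enum import Enum

class Trust(Enum):
    UNKNOWN = 0   # device could not be fingerprinted
    LOW = 1
    HIGH = 2

class Verdict(Enum):
    BENIGN = "benign"
    SUSPICIOUS = "suspicious"
    MALICIOUS = "malicious"

@dataclass
class Device:
    mac: str
    trust: Trust      # derived from device fingerprinting
    verdict: Verdict  # output of the ML-based NIDS

def assign_segment(device: Device) -> str:
    """Choose a micro-segment based on trust and NIDS classification."""
    if device.verdict is Verdict.MALICIOUS:
        return "quarantine"           # isolate pending further analysis
    if device.trust is Trust.HIGH and device.verdict is Verdict.BENIGN:
        return "trusted-lan"          # full access to the home network
    if device.trust is Trust.UNKNOWN or device.verdict is Verdict.SUSPICIOUS:
        return "guest-internet-only"  # Internet access, no lateral traffic
    return "restricted"               # limited access by default

print(assign_segment(Device("aa:bb:cc:dd:ee:ff", Trust.UNKNOWN, Verdict.BENIGN)))
# prints guest-internet-only
```

The point of the sketch is that the policy decision is automatic and consequential, which is precisely what motivates the accountability questions below.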
The concept of algorithmic governance is now well established, along with concerns about facilitated decision-making systems that cluster around decision-making processes and their outputs. The governance of ML-based home security systems raises similar concerns meriting analysis. What are the implications of an incorrect classification from an ML-based home security system? Who is accountable if you are locked out of your home, or prevented from working? Furthermore, a range of responses can be generated by the security system: the device may be blocked from the network temporarily or permanently, quarantined pending user input, or subjected to further automatic analysis. For each level of response, what evidence should the ML system supply, e.g., an explanation, or the presence of a malicious pattern extending over a specific duration? How might this level of evidence be influenced by legal requirements, e.g., the need to respond to a challenge to the system’s decision? If the threshold for evidence is not reached, what is the response? Human decision-making? How might we accommodate user feedback or institute a reasonable appeals process? Can we adapt the principles underlying the accountability for reasonableness framework to establish ground rules for fair process in decision-making involving ML-NIDS?
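The graduated responses and evidence thresholds described above can be sketched as a simple policy, in which weaker evidence defers to a human in the loop. The thresholds, evidence fields, and response names here are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: a graduated response policy where the action depends
# on the strength of evidence the ML-NIDS can supply.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    confidence: float          # classifier confidence in [0, 1]
    pattern_duration_s: int    # how long the malicious pattern persisted
    explanation: Optional[str] # human-readable rationale, if available

def respond(ev: Evidence) -> str:
    """Map evidence strength to a response; below threshold, defer to a human."""
    if ev.confidence >= 0.95 and ev.pattern_duration_s >= 300 and ev.explanation:
        return "block-permanent"    # strong, sustained, explainable evidence
    if ev.confidence >= 0.80:
        return "block-temporary"    # re-evaluate after a cooling-off period
    if ev.confidence >= 0.60:
        return "quarantine-pending-user-input"  # user can confirm or appeal
    return "escalate-to-human"      # evidence threshold not reached
```

A design like this makes the evidence requirement for each response level explicit, which is one way the system could be structured to support challenges and appeals.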
The focus of this research project will be to explore these questions from the system perspective in order to establish accountability in algorithmic security solutions: for example, exploring how the system can be designed to support the provision of evidence for classification decisions.
This project may be completed within three or four years, depending on the applicant’s experience.
Daniels, Norman. "Accountability for reasonableness: Establishing a fair process for priority setting is easier than agreeing on principles." BMJ 321.7272 (2000): 1300–1301.
This project aligns with the primary supervisor’s activities with the IEEE P2863 Working Group on Algorithmic Governance of AI and Polymath Fellowship in the Global Fellowship Initiative at the Geneva Centre for Security Policy (GCSP).
The student will have access to state-of-the-art network facilities and cyber range in the Centre for Secure Information Technologies (CSIT), Belfast.
For more information about the LINAS Doctoral Training Programme, including eligibility criteria and how to apply, please visit:
Deadline for applications: 31 January 2023 at 4.00pm