Centre for Intelligent Sustainable Computing

Events

All I Know Is What the Words Know!


Exploring the Boundaries and Capabilities of Large Language Models in the Age of ChatGPT

Past Event

Date: March 30, 2023
Location: Online via Zoom
Time: 14:00 - 17:00
Price: Free

After years of rapid progress in Artificial Intelligence and Natural Language Processing, ChatGPT has captured the public imagination in a way that is unprecedented for an AI system, reportedly reaching over 100 million active users within two months of its launch. What are the scientific and societal implications of such models, and how will they affect the language sciences, language technology and society more generally? In this series of talks, experts in AI and the language sciences describe their recent research on the scientific and governance implications of large language models (LLMs). The event will also include a Q&A session, facilitating a broader discussion of the capabilities and impact of LLMs.


Talks

The Semantic Competence of Large Language Models

Raphaël Millière, Columbia University

Over the past decade, artificial intelligence has made remarkable advancements, largely due to sophisticated neural networks capable of learning from vast amounts of data. One area where these advancements have been particularly evident is in natural language processing, as demonstrated by the impressive capabilities of Large Language Models (LLMs) like GPT-4 and ChatGPT. LLMs possess the astonishing ability to produce well-structured and coherent paragraphs on a wide array of subjects. Moreover, they can perform a diverse set of tasks, such as summarizing extensive articles, translating languages, answering complex questions, solving elementary problems, generating code for rudimentary programs, and even elucidating jokes. As AI becomes increasingly proficient in language-related tasks, it prompts the question of whether these models truly comprehend the language they process. This inquiry has reignited long-standing philosophical debates concerning the nature of language understanding. However, defining 'understanding' in this context and distinguishing it from consciousness remains a challenge. In this presentation, I will propose to focus on the more specific notion of 'semantic competence,' which refers to the ability to comprehend the meanings of words and phrases and to employ them effectively. I will explore two facets of semantic competence: inferential, which pertains to connecting linguistic expressions with other expressions, and referential, which relates expressions to the real world. Drawing from philosophy, linguistics, and recent research on LLMs, I will argue that these models possess considerable inferential competence and even a limited degree of referential competence. In conclusion, I will contemplate the elements still absent from AI models and consider what steps are necessary for them to achieve a level of semantic competence comparable to that of humans.
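To make the inferential/referential distinction concrete, here is a minimal, hypothetical probe of inferential competence (not taken from the talk): it checks whether a small language model, GPT-2 standing in for larger LLMs, assigns higher probability to an entailed continuation than to a contradictory one, using the Hugging Face transformers library.

```python
# A minimal sketch of probing *inferential* competence: does the model
# prefer an entailed continuation over a contradictory one? GPT-2 is a
# stand-in for larger LLMs; the example sentences are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_log_prob(text: str) -> float:
    """Total log-probability the model assigns to a string."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over shifted tokens
    return -loss.item() * (ids.shape[1] - 1)

premise = "A beagle is a kind of dog, so a beagle is"
entailed = premise + " an animal."
contradicted = premise + " a vegetable."

# Inferential competence predicts the entailed continuation scores higher.
print(total_log_prob(entailed) > total_log_prob(contradicted))
```

A single comparison like this proves little on its own; probing studies of the kind discussed in the talk aggregate many such minimal pairs.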

Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. He previously completed his DPhil (PhD) in philosophy at the University of Oxford, where he worked on self-consciousness. His interests lie mainly within the philosophy of artificial intelligence, cognitive science, and mind.

Encoding & Decoding Natural Language in fMRI

Alex Huth, University of Texas at Austin

The meaning, or semantic content, of natural speech is represented in highly specific patterns of brain activity across a large portion of the human cortex. Using recently developed machine learning methods and very large fMRI datasets collected from single subjects, we can construct models that predict brain responses with high accuracy. Interrogating these models enables us to map language selectivity and potentially uncover organizing principles. The same techniques also enable us to construct surprisingly effective decoding models, which predict language stimuli from brain activations recorded using fMRI. Using these models we are able to decode language while subjects imagine telling a story, and while subjects watch silent films with no explicit language content.
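For readers unfamiliar with encoding models, the following is a minimal sketch, on purely synthetic data, of the general approach described above: regularized regression from per-timepoint language features to voxel responses, scored by held-out prediction accuracy. The shapes, data, and regularization strength are illustrative assumptions, not the lab's actual pipeline.

```python
# Sketch of an fMRI *encoding* model: ridge regression from language
# features to voxel responses. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 1000, 300, 500  # timepoints, stimulus dims, voxels

# Synthetic stand-ins: language features per timepoint and simulated BOLD.
X = rng.standard_normal((n_trs, n_features))
true_weights = 0.1 * rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + rng.standard_normal((n_trs, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# One regularized weight map per voxel; alpha would normally be cross-validated.
enc = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = enc.predict(X_te)

# Score each voxel by correlating predicted and "measured" held-out responses.
r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median held-out voxel correlation: {np.median(r):.2f}")
```

Decoding models run the mapping in the other direction, predicting stimulus features from recorded brain activity, but the fit-then-evaluate-on-held-out-data logic is the same.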

Alex Huth is an Assistant Professor at The University of Texas at Austin in the Departments of Neuroscience and Computer Science. His lab uses natural language stimuli and fMRI to study language processing in human cortex, in work funded by the Burroughs Wellcome Fund, the Sloan Foundation, the Whitehall Foundation, the NIH, and others. Before joining UT, Alex did his PhD and postdoc in Jack Gallant's laboratory at UC Berkeley, where he developed novel methods for mapping semantic representations of visual and linguistic stimuli in human cortex.

Data governance and transparency for Large Language Models: lessons from the BigScience Workshop

Anna Rogers, IT University of Copenhagen

The continued growth of LLMs and their wide-scale adoption in commercial applications such as ChatGPT make it increasingly important to (a) develop ways to source their training data more transparently, and (b) investigate that data, both for research and to address ethical issues. This talk will discuss the current state of affairs and some data governance lessons learned from BigScience, an open-source effort to train a multilingual LLM, including an ongoing effort to investigate the 1.6 TB multilingual ROOTS corpus.
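As a hypothetical illustration of the kind of corpus auditing the talk describes, the sketch below streams a web-scale corpus and tallies simple statistics without downloading it in full. The dataset used (allenai/c4 via the Hugging Face datasets library) is a public stand-in; the ROOTS corpus itself is gated and is not assumed to be directly loadable here.

```python
# Sketch of auditing a web-scale corpus by streaming a small sample.
# Dataset name is a stand-in, not a ROOTS subset.
from collections import Counter
from datasets import load_dataset

# Stream rather than download: essential at terabyte scale.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

length_kb = Counter()
seen_hashes = Counter()
for i, record in enumerate(stream):
    text = record["text"]
    length_kb[len(text) // 1024] += 1  # bucket documents by size in KiB
    seen_hashes[hash(text)] += 1       # crude exact-duplicate detector
    if i >= 9_999:                     # audit a small sample only
        break

print("document size buckets (KiB):", length_kb.most_common(5))
print("exact duplicates in sample:",
      sum(c - 1 for c in seen_hashes.values() if c > 1))
```

Real governance tooling goes much further (provenance records, licensing metadata, PII detection), but even simple streamed statistics like these are only possible when the training data is made inspectable in the first place.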

Anna Rogers is an assistant professor at the IT University of Copenhagen. After receiving her PhD in computational linguistics from the University of Tokyo, she was a postdoctoral associate in machine learning for NLP at the University of Massachusetts Lowell, and then in social data science at the University of Copenhagen. She is one of the program chairs of ACL'23 and an organizer of the Workshop on Insights from Negative Results in NLP.


Event type: Lecture / Talk / Discussion
Audience: All
Contact Us

Centre for Intelligent Sustainable Computing (CISC)
Computer Science Building
16A Malone Road
Belfast
Northern Ireland
BT9 5BN


Email: cisc-eeecs@qub.ac.uk
Web: www.qub.ac.uk/cisc/


© Queen's University Belfast 2024