Linguistic and Epistemological Perspectives on Testimony
Testimony is an indispensable method for acquiring knowledge. Yet understanding the semantics and pragmatics of testimonial utterances, and explicating the conditions under which such utterances are reliable—and hence knowledge-conferring—is one of the most fundamental problems in philosophy. This problem has both epistemological and linguistic features. But while it must be investigated from both of these perspectives, epistemologists and philosophers of language often address it in relative isolation. Our main aim in this project is to integrate current cutting-edge research on the epistemological and linguistic foundations of testimony. We want to consider (a) how current research in the philosophy of language and the epistemology of testimony may complement each other and (b) how these research programs may be applied to practical societal matters. Our main focus will be cases of public scientific testimony and the proper role of scientific testimony in society. This includes, for example, scientific testimony about the impact of climate change. We will also compare cases of scientific testimony to other cases that concern the nature and impact of testimony on normative issues, e.g., political testimony. The latter cases in particular also raise questions regarding harmful testimony and testimonial injustice, which will be a central focus in the latter stages of the project.
Final report
The purpose of this project was to investigate the extent to which theories in philosophy of language and semantics could be used to shed light on foundational issues in epistemology, mainly in the epistemology of testimony. More generally, we wanted to consider how these research programs could be applied to practical societal matters, with a primary focus on cases of public scientific testimony and the proper role of scientific testimony in society. This would include, for example, scientific testimony about the impact of climate change. However, we also wanted to compare cases of scientific testimony to other cases that concern the nature and impact of testimony on normative issues. The latter cases in particular also raised questions regarding harmful testimony and testimonial injustice, which would be a focal point in the late stages of the project. Many of these issues were addressed during the course of the project, both at the project's workshops and conferences and in academic publications and presentations at conferences elsewhere. Moreover, project research on some of these issues was presented in the public domain through public talks, panel debates, and podcasts.
Aside from these planned activities, an unexpected turn in the project arose from the extremely rapid emergence of AI chatbots run on large language models (LLMs), which suddenly introduced an unexplored and potentially problematic dimension to the issue of the reliability of scientific testimony. On the linguistic side in particular, the question quickly arose whether the outputs of LLMs can even be regarded as meaningful at all. And on the epistemological side, it became clear that LLM-driven chatbots have emerged as a competitor to standard sources of scientific testimony and hence as a method for bypassing experts. While this may in the general case be highly beneficial, the development comes with a number of epistemic risks that suddenly became very relevant to our project. As a result, some of the project members immediately began to integrate these issues into their research. However, given the time lag in academic publishing, many of the results will be forthcoming over the next few years. Furthermore, many of the project's contributions provide an important basis for further research on AI testimony. As such, the project exemplifies how foundational research can prove important for dealing with rapidly changing technological developments with great societal impact.
On the question of the most significant findings of the project: on the philosophy of language/semantics side, these came toward the tail end of the project (with further results in progress). Initially, we set out to determine the extent to which existing theories of indirect speech and indirect speech reports could be utilized to shed light on actual practical problems in the epistemology of testimony. These issues were thematized in the project workshops and conferences. Generally, the interactions were fruitful insofar as both epistemologists and philosophers of language were able to raise novel research questions. Given the slow nature of foundational research, much of this research remains in progress, but as noted, it could be extended to address the emergence of AI once we realized that chatbots powered by large language models have already become a kind of scientific testifier relied upon both by the general public and by individuals with significant influence on policy making. We therefore spent a significant amount of time figuring out how these LLMs actually function and then discussing the extent to which their outputs can be considered genuinely meaningful and/or reliable. This resulted in the publication 'The Outputs of LLMs are Meaningless' (by the PI, Anders Schoubye, co-authored with Anandi Hattiangadi, now forthcoming), in which it is argued that the outputs of LLMs are meaningful only to the extent that the user endows those outputs with meaning. This has also led to a number of new research questions that we are currently investigating, such as whether (and if so, how) one can attain knowledge from chatbot "testimony" and what, if anything, can in fact be learned about human cognition from these models.
On the epistemological side of the project, the work on science communication in particular has given rise to a number of conceptual questions that were not anticipated. For example, some hypotheses concerning the communication of scientific uncertainty require empirical testing, and in consequence the project members are now developing study designs in international collaboration. Furthermore, questions about non-factual scientific testimony—e.g., recommendations by scientific advisory boards—have arisen and are currently being explored; two papers on this topic are now nearing submission for publication. Finally, the application of research results to science journalism during Covid-19 raised unforeseen issues regarding science communication in a crisis situation. These have been addressed both in academic publications and in the public domain. For example, Prof. Gerken served as a scientific advisor on a guide for communicating science that was developed during Covid-19 and distributed to 11,000 journalists and communication professionals during the pandemic.
One somewhat interesting but not particularly surprising result the project has revealed is how difficult it is to combine insights from fields that are closely related but have significantly different aims. As mentioned, the main ambition on the linguistic side of the project was to apply observations and theories from philosophy of language and linguistic semantics concerning indirect speech to practical epistemological problems. However, we came to realise that indirect speech reports are still rather poorly understood and that there remain significant disagreements about the correct analysis of these linguistic constructions. Research in philosophy of language and linguistics tends to be quite narrow in its focus and rife with idealizations. In actual practical situations, a vast number of additional pragmatic and extra-linguistic factors are in play, which makes the application of somewhat idealized theory far from straightforward. In short, it seems that we need a better understanding of the linguistic foundations before applications can be fruitfully made. So, while one finding is simply the recognition of the complexity of the task, the collaborations during workshops and conferences have yielded some progress in reconceptualizing a number of research questions in a common terminology.
Our research was disseminated in a variety of ways. Papers were published across a wide spectrum of top-tier academic journals and publishers, and research talks were delivered at multiple international venues. Anders Schoubye (PI) presented project-related research at the City University of New York, the University of Gothenburg, Uppsala University, the Institute for Futures Studies in Stockholm, Birkbeck College in London, and the internal research seminar CLLAM at Stockholm University. Gerken presented project-related material at Stanford University, the Munich Center for Mathematical Philosophy, New York University, the University of Milan, the University of Glasgow, and Queen's University Belfast, among others. Gerken has also delivered a number of public talks to a broader audience on the role of scientific expert testimony in society. Finally, Anna-Sara Malmgren has workshopped project-related material at the University of Texas at Austin. She has also organized public talks and colloquia at her home institution (Inland Norway University) on communication with AI and AI testimony. Multiple papers are in progress, and some are under review as well.