
On 26 March 2025 at 15:00, a discussion will take place: “Leveraging Artificial Intelligence to Enhance the Responsible Research and Innovation Framework” (The Role of Artificial Intelligence in Improving Framework Conditions for Responsible Research and Innovation, RRI; in Russian and English).

The event has concluded.


Speakers: Prof. Wang Dazhou (School of Humanities, University of Chinese Academy of Sciences) and Prof. Zhang Zhihui (Institute of Natural Science History, Chinese Academy of Sciences). Moderator: Alexander Mikhailovsky, Doctor of Sciences in Philosophy, Associate Professor at the School of Philosophy and Cultural Studies. Participants from HSE University: L. Vervoort, I.I. Mavrinsky, I.A. Karpenko, V.E. Terekhovich, A.V. Ugleva. External participants: N.G. Bagdasaryan (Bauman Moscow State Technical University), Taras Varkhotov (Moscow State University), and others (RAS Institute of Philosophy, INION RAS).

The event is hosted by the School of Philosophy and Cultural Studies of HSE University as part of the research seminar “Artificial Intelligence in the World of People: Humanistic, Ethical and Legal Aspects of Digital Technology Development” (headed by A.V. Ugleva).

The event will be held in a hybrid format at 21/4 Staraya Basmannaya Ulitsa, room A-309.

Prior registration is required to attend.


The workshop “Artificial Intelligence in the World of People: Humanistic, Ethical and Legal Aspects of Digital Technology Development”, led by Anastasia Ugleva, is held at the School of Philosophy and Cultural Studies of the Faculty of Humanities. The public discussion matters for drawing the attention of both the general public and the professional scientific community to the broadly humanitarian, and especially ethical, questions raised by the development, testing and implementation of AI. The participation of foreign philosophers and scientists from the Chinese Academy of Sciences is important in the long term for establishing intercultural and interdisciplinary scientific communication on AI, without which it is impossible to account for the civilizational specifics of the social assessment of technology, the principles of organizing ethical expertise in the field of AI, and ethical and legal collisions. This serves the seminar's general framework objectives: the formation of a language for professional discussion and the development of codes for converting and translating key concepts.

Wang Dazhou (School of Humanities, University of Chinese Academy of Sciences), Zhang Zhihui (Institute of Natural Science History, Chinese Academy of Sciences), “Leveraging Artificial Intelligence to Enhance the Responsible Research and Innovation Framework”

Abstract. In the governance of emerging technologies, the responsible research and innovation (RRI) approach has grappled with inherent conflicts, notably between the inclusivity and the agility of the research and innovation process. These challenges persist when applying RRI principles to the development of artificial intelligence (AI). A transformative shift involves viewing AI not merely as the object of ethical governance but as a potent tool for advancing it. Following a discussion of the ethical governance potential of artificial intelligence, it is argued that AI can be used as a means to enhance RRI into AI-based RRI, and that a strategy of AI against AI can further strengthen AI-based RRI, simultaneously increasing inclusivity and agility and thereby promoting more effective ethical discussion and decision-making.

Alexandra Kasakova (University of Chinese Academy of Sciences), “Institutional Trust and Public Attitudes to AI Applications”

Louis Vervoort (School of Philosophy and Cultural Studies, HSE University), “Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy?”

Abstract. We propose a test for abstract causal reasoning in AI, based on scholarship in the philosophy of causation, in particular on the neuron diagrams popularized by D. Lewis. We illustrate the test on advanced Large Language Models (ChatGPT, DeepSeek and Gemini). Remarkably, these chatbots are already capable of correctly identifying causes in cases that are hotly debated in the literature. In order to assess the results of these LLMs and future dedicated AI, we propose a definition of cause in neuron diagrams with a wider validity than published hitherto, which challenges the widespread view that such a definition is elusive. We submit that these results are an illustration of how future philosophical research might evolve: as an interplay between human and artificial expertise.

Alexei Durnev (ITMO University), “Key Aspects of Safety Issues in AI Implementation”

Abstract. The most advanced AI solutions implemented in frontier areas of research (such as DeepTech and industrial applications) can be described as an ensemble of models engaged in data gathering, storage and processing. The key characteristics of such models are recurrency and generativity, which enable a model to perform effectively with a rather high degree of autonomy. In this light, we elaborate how various safety issues stem from this understanding of AI models. We also discuss the primary areas of the innovation framework in which these issues should be adequately addressed, utilizing the conceptions of explainability and distributed responsibility.

Oleg Gurov (State Academic University for the Humanities), “From cyborgization to digital inclusion”

Abstract. In my report, I will consider how the concept of social adaptation is changing in connection with AI, with a focus on RRI: the emphasis is shifting from individual "improvement" (the way cyborgization was initially defined) to the creation of systems that ensure inclusion and equality for vulnerable groups across a wide range of social life, above all in education and healthcare. The ethical dilemmas must also be named: the risks of discrimination and polarization, and the loss of control over important decisions. RRI can therefore serve as a platform for creating mechanisms that effectively combine technological achievement with social justice.

Elena Trufanova (Institute of Philosophy of the Russian Academy of Sciences), “Trustworthiness and responsibility as the key issues of the AI application”

Abstract. Two main questions of present-day AI application are as follows: 1) what can we trust the AI with? 2) who is responsible for the decisions that we entrust AI to make? With the rapid development of generative AI systems over the last couple of years, they have become a tool that everyone wants to apply in different fields, from language translation to legal decision-making and medical design. But the seeming ease and effectiveness of AI application in certain kinds of problem-solving is all too tempting, because everyone loves fast and simple solutions. There is also the psychological issue of people trusting the objectivity of machines over subjective human nature (despite the known facts of machine flaws). As a result, AI can easily become a very common tool of technocracy, which involves the risk of shifting the responsibility for decisions from humans to AI systems. The main task of humanitarian expertise in this field should be to draw the boundaries of what we entrust AI to do (both in scientific research and in the social sphere) and to define the role of the human expert who should approve or disapprove of AI actions concerning sensitive matters (such as human health and well-being, environmental issues, ethical and legal questions, etc.).

Vladimir Vetrov (Primakov National Research Institute of World Economy and International Relations of the Russian Academy of Sciences; Institute of Philosophy of the Russian Academy of Sciences), “AI in RRI: object, example or tool”

Abstract. The author will analyze the relationship between artificial intelligence and the Responsible Research and Innovation approach. The thesis is put forward about the triple significance of AI within the RRI framework: as an object of consideration, an example of responsible technology, and a tool in decision-making.

For this purpose, the definitions of AI as enshrined in the documents of international commissions, the RRI principles and their interaction with the stakeholders of the innovation process, as well as their applicability to AI, are considered in sequence. Attention is also paid to transparency as one of the RRI principles, which turns out to be problematic given the internal workings of generative language models and the translation/interpretation of human knowledge. The prospects of AI as an analytical tool in technology policy-making are evaluated.

Alexander Mikhailovsky (School of Philosophy and Cultural Studies, HSE University), “Artificial Intelligence Liberalism?”

Abstract. In my talk, I will outline the main topic of my conversation with Carl Mitcham, which took place in personal correspondence with the American and Chinese philosopher of technology in the fall of 2024 and was published in the journal “Technologos”. Drawing on Aristotle’s Nicomachean Ethics, Mitcham argues that ethics alone is insufficient to address the challenges posed by artificial intelligence and must be complemented by politics to cultivate human flourishing. Mitcham tries to offer an adequate political philosophy for the ethics of artificial intelligence, but in doing so he does not go beyond five political issues of AI, identified as freedom, justice, democracy, power and anthropology. At the same time, he argues that an individualist emphasis lies at the core of Euro-American political regimes, which, in the name of equality, resist admitting any hard and ethically or politically consequential distinctions between the many and the few. The social ontology of the modern West prioritizes individual autonomy and freedom, even when trying to speak of the common good. I touch on these limitations of Artificial Intelligence Liberalism and pose my own question: if we want a political philosophy of AI for a multipolar world, should we take into account the difference between continental and Anglo-Saxon philosophy, and correspondingly between social and ethical thought in Russia and East Asia? Or the difference between Western and Eastern cultures in their understanding of existential values?