<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">rusworld</journal-id><journal-title-group><journal-title xml:lang="ru">Россия и мир: научный диалог</journal-title><trans-title-group xml:lang="en"><trans-title>Russia &amp; World: Sc. Dialogue</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2782-3067</issn><publisher><publisher-name>НИИРК</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.53658/RW2026-4-1(19)-43-62</article-id><article-id custom-type="elpub" pub-id-type="custom">rusworld-370</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>МЕЖДУНАРОДНЫЕ, ГЛОБАЛЬНЫЕ И РЕГИОНАЛЬНЫЕ ПРОЦЕССЫ. Международные отношения, глобальные и региональные исследования</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>INTERNATIONAL, GLOBAL AND REGIONAL PROCESSES. International relations, global and regional studies</subject></subj-group></article-categories><title-group><article-title>Возможности и ограничения научного использования LLM: первичный анализ данных, проблемы предвзятости и валидация</article-title><trans-title-group xml:lang="en"><trans-title>Capabilities and Limitations in Scientific Application of LLMs: Preliminary Data Analysis, Bias, and Validation</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-8372-1193</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Колотаев</surname><given-names>Ю. 
Ю.</given-names></name><name name-style="western" xml:lang="en"><surname>Kolotaev</surname><given-names>Yu. Yu.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Юрий Юрьевич Колотаев. Кандидат политических наук. Старший преподаватель кафедры европейских исследований</p><p>199034, г. Санкт-Петербург, Университетская наб., 7-9.</p></bio><bio xml:lang="en"><p>Yury Yu. Kolotaev. CandSc. (Polit.). Senior Lecturer, European Studies Department, School of International Relations</p><p>Address: 7-9, Universitetskaya nab., St. Petersburg, 199034</p></bio><email xlink:type="simple">yury.kolotaev@mail.ru</email><xref ref-type="aff" rid="aff-1"/></contrib><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-7118-8632</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Базлуцкая</surname><given-names>М. М.</given-names></name><name name-style="western" xml:lang="en"><surname>Bazlutckaya</surname><given-names>M. M.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Мария Михайловна Базлуцкая. Кандидат политических наук. Исполнительный директор</p><p>194064, г. Санкт-Петербург, пр-кт Раевского, д. 16, литера А, помещ. 5-н.</p></bio><bio xml:lang="en"><p>Mariya M. Bazlutckaya. CandSc. (Polit.). Executive Director</p><p>Room 5-N., 16A, Rayevsky ave., St. Petersburg, 194064</p></bio><email xlink:type="simple">m.bazlutskaya@gmail.com</email><xref ref-type="aff" rid="aff-2"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Санкт-Петербургский государственный университет<country>Россия</country></aff><aff xml:lang="en">Saint-Petersburg State University<country>Russian Federation</country></aff></aff-alternatives><aff-alternatives id="aff-2"><aff xml:lang="ru">Автономная некоммерческая научно-исследовательская
организация «Координационная лаборатория» (АНО
«Колаборатория»)<country>Россия</country></aff><aff xml:lang="en">Autonomous Non-Commercial Research Organisation «Coordination Lab» (ANO Colaboratoria)<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>30</day><month>03</month><year>2026</year></pub-date><volume>0</volume><issue>1</issue><fpage>43</fpage><lpage>62</lpage><permissions><copyright-statement>Copyright &#x00A9; Колотаев Ю.Ю., Базлуцкая М.М., 2026</copyright-statement><copyright-year>2026</copyright-year><copyright-holder xml:lang="ru">Колотаев Ю.Ю., Базлуцкая М.М.</copyright-holder><copyright-holder xml:lang="en">Kolotaev Y.Y., Bazlutckaya M.M.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://www.russia-world.ru/jour/article/view/370">https://www.russia-world.ru/jour/article/view/370</self-uri><abstract><p>Большие языковые модели (LLM) становятся популярными среди научного сообщества, а их применение все чаще можно встретить в социогуманитарных исследованиях. Представленная обзорная статья обобщает раскрытые возможности внедрения LLM в текстовый анализ данных и систематизирует ограничения, с которыми приходится сталкиваться ученым на этом пути. Авторы обозначают наилучшие «точки входа» в исследования с помощью LLM, но наибольшее внимание уделяется проблемам предвзятости моделей, валидации (проверке) и репликации (воспроизводимости) результатов исследований, в которых использовался ИИ. В работе предлагается несколько возможных стратегий улучшения качества работы с генеративными моделями в соответствии с процедурой триангуляции. Они включают проверку альтернативных запросов, тестирование разных выборок данных и использование связки моделей. 
Практика применения LLM в гуманитарных науках показывает, что при грамотной настройке они имеют потенциал к снижению временных издержек, расширяют аналитические возможности ученых и могут содействовать в выявлении скрытых закономерностей в текстовых массивах. Однако эффективность научного применения LLM напрямую зависит от исследовательской осмотрительности, выраженной в понимании границ применимости инструмента, корректности постановки задач, качестве исходных данных, а также умении нормализовать неструктурированные данные. Без этих условий использование моделей рискует превратиться в симуляцию научного исследования. Статья призвана стать стартовой точкой для политологов и исследователей международных отношений, заинтересованных в качественном внедрении LLM в свою аналитическую работу.</p></abstract><trans-abstract xml:lang="en"><p>Large language models (LLMs) are becoming increasingly popular within the academic community, and their use is now more frequently observed in the social sciences. This article summarizes the emerging opportunities for integrating LLMs into textual data analysis and systematizes the limitations that scholars encounter along the way. The authors identify the most effective «points of entry» for incorporating LLMs into research, while devoting particular attention to issues of model bias, validation, and the replication of results produced with AI assistance. The paper proposes several possible strategies for enhancing the quality of work with generative models in accordance with the triangulation procedure. These include examining alternative prompts, testing various data samples, and employing a combination of models. Existing studies show that, when properly configured, LLMs can reduce time costs, expand researchers’ analytical capacities, and help uncover hidden patterns within large textual corpora. 
However, the effectiveness of scientific applications of LLMs directly depends on scholarly diligence, including a clear understanding of the tool’s scope, careful problem formulation, high-quality input data, and the ability to normalize unstructured data. Without these conditions, the use of such models risks devolving into a simulation of scientific inquiry. The article is intended to serve as a starting point for political scientists and international relations researchers interested in integrating LLMs into their analytical work.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>искусственный интеллект (ИИ)</kwd><kwd>социальные науки</kwd><kwd>большая языковая модель (LLM)</kwd><kwd>предвзятость</kwd><kwd>валидация</kwd><kwd>триангуляция</kwd></kwd-group><kwd-group xml:lang="en"><kwd>artificial intelligence (AI)</kwd><kwd>social science</kwd><kwd>large language model (LLM)</kwd><kwd>bias</kwd><kwd>validation</kwd><kwd>triangulation</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Ашихмин Е.Г., Левченко В.В., Селеткова Г.И. Опыт применения больших языковых моделей для анализа количественных социологических данных [Experience in Applying Large Language Models to Analyse Quantitative Sociological Data] // Вестник университета. 2024. № 11. С. 205–215. https://doi.org/10.26425/1816-4277-2024-11-205-215.</mixed-citation><mixed-citation xml:lang="en">Ashikhmin E.G., Levchenko V.V., Seletkova G.I. Experience in Applying Large Language Models to Analyse Quantitative Sociological Data. Vestnik universiteta [Vestnik Universiteta]. 2024; 11:205–215 [In Russian]. https://doi.org/10.26425/1816-4277-2024-11-205-215.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Базлуцкая М.М., Сытник А.Н. 
Трансмедийное вовлечение: фрейм-анализ цифровой дипломатии США в России при помощи искусственного интеллекта [Transmedia Engagement: AI-Driven Frame Analysis of the U.S. Digital Diplomacy in Russia] // Россия и мир: научный диалог. 2024. № 4(14). С. 63–85. https://doi.org/10.53658/RW2024-4-4(14)-63-85.</mixed-citation><mixed-citation xml:lang="en">Bazlutckaia M.M., Sytnik A.N. Transmedia Engagement: AI-Driven Frame Analysis of the U.S. Digital Diplomacy in Russia. Rossiya i mir: nauchnyj dialog [Russia &amp; World: Sc. Dialogue]. 2024; (4):63–85 [In Russian]. https://doi.org/10.53658/RW2024-4-4(14)-63-85.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Игнатьев А.Г. Этико-философские проблемы проектирования искусственного морального агента [Ethical and Philosophical Problems of Designing Artificial Moral Agent] // Этическая мысль. 2024. Т. 24. № 1. С. 87–100.</mixed-citation><mixed-citation xml:lang="en">Ignatev A.G. Ethical and Philosophical Problems of Designing Artificial Moral Agent. Eticheskaya mysl’ [Ethical thought]. 2024; 24(1):87–100 [In Russian].</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Коршунов А., Белобородов И., Бузун Н., Аванесов В., Пастухов Р., Чихрадзе К., Козлов И., Гомзин А., Андрианов И., Сысоев А., Ипатов С., Филоненко И., Чуприна К., Турдаков Д., Кузнецов С. Анализ социальных сетей: методы и приложения [Social Network Analysis: Methods and Applications] // Труды Института системного программирования РАН. 2014. Т. 26. № 1. С. 439–456. https://doi.org/10.15514/ISPRAS-2014-26(1)-19.</mixed-citation><mixed-citation xml:lang="en">Korshunov A., Beloborodov I., Buzun N., Avanesov V., Pastukhov R., Chykhradze K., Kozlov I., Gomzin A., Andrianov I., Sysoev A., Ipatov S., Filonenko I., Chuprina Ch., Turdakov D., Kuznetsov S. Social Network Analysis: Methods and Applications. 
Trudy Instituta sistemnogo programmirovaniya RAN [Proceedings of the Institute for System Programming of the RAS]. 2014; 26(1):439–456 [In Russian]. https://doi.org/10.15514/ISPRAS-2014-26(1)-19.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Соменков С.А. Искусственный интеллект: от объекта к субъекту? [Artificial Intelligence: from Object to Subject?] // Вестник университета имени О.Е.Кутафина. 2019. № 2(54). С. 75–85. https://doi.org/10.17803/2311-5998.2019.54.2.075-085.</mixed-citation><mixed-citation xml:lang="en">Somenkov S.A. Artificial Intelligence: from Object to Subject? Vestnik universiteta imeni O.E.Kutafina [Courier of Kutafin Moscow State Law University (MSAL)]. 2019; (2):75–85 [In Russian]. https://doi.org/10.17803/2311-5998.2019.54.2.075-085.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Сысоев П.В., Филатов Е.М. ChatGPT в исследовательской работе студентов: запрещать или обучать? [ChatGPT in Students’ Research Work: to Forbid or to Teach?] // Вестник Тамбовского университета. Серия: Гуманитарные науки. 2023. Т. 28. № 2. С. 276–301. https://doi.org/10.20310/1810-0201-2023-28-2-276-301.</mixed-citation><mixed-citation xml:lang="en">Sysoyev P.V., Filatov E.M. ChatGPT in Students’ Research Work: to Forbid or to Teach? Vestnik Tambovskogo universiteta. Seriya: Gumanitarnye nauki [Tambov University Review. Series: Humanities]. 2023; 28(2):276–301 [In Russian]. https://doi.org/10.20310/1810-0201-2023-28-2-276-301.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Baiburin A., Berezkin Yu., Gromov A., Kovalenko K., Sokolov E., Kovalyova N., Moskvitina A., Shirobokov I., Stanulevich N., Utekhin I., Boitsova O. 
Artificial Intelligence in the Social Sciences and Humanities // Forum for Anthropology and Culture. 2024. № 20. P. 11–60. https://doi.org/10.31250/1815-8870-2024-20-20-11-60.</mixed-citation><mixed-citation xml:lang="en">Baiburin A., Berezkin Yu., Gromov A., Kovalenko K., Sokolov E., Kovalyova N., Moskvitina A., Shirobokov I., Stanulevich N., Utekhin I., Boitsova O. Artificial Intelligence in the Social Sciences and Humanities. Forum for Anthropology and Culture. 2024; 20:11–60 [In English]. https://doi.org/10.31250/1815-8870-2024-20-20-11-60.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Bazlutckaia M., Sytnik A., Tsvetkov T., Punchenko P. AI-Assisted Bias Detection of US Digital Diplomacy in Russia (2009–2023): A ChatGPT Approach // International Conference on Human-Computer Interaction. Cham: Springer Nature Switzerland, 2025. P. 189–209. https://doi.org/10.1007/978-3-031-93536-7_14.</mixed-citation><mixed-citation xml:lang="en">Bazlutckaia M., Sytnik A., Tsvetkov T., Punchenko P. AI-Assisted Bias Detection of US Digital Diplomacy in Russia (2009–2023): A ChatGPT Approach. In International Conference on Human-Computer Interaction. Cham: Springer Nature Switzerland, 2025:189–209 [In English]. https://doi.org/10.1007/978-3-031-93536-7_14.</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Bojić L., Zagovora O., Zelenkauskaite A., Vuković V., Čabarkapa M., Jerković S.V., Jovančević A. Comparing Large Language Models and Human Annotators in Latent Content Analysis of Sentiment, Political Leaning, Emotional Intensity and Sarcasm // Scientific Reports. 2025. Vol. 15(1). P. 11477. https://doi.org/10.1038/s41598-025-96508-3.</mixed-citation><mixed-citation xml:lang="en">Bojić L., Zagovora O., Zelenkauskaite A., Vuković V., Čabarkapa M., Jerković S.V., Jovančević A. 
Comparing Large Language Models and Human Annotators in Latent Content Analysis of Sentiment, Political Leaning, Emotional Intensity and Sarcasm. Scientific Reports. 2025; 15(1):11477 [In English]. https://doi.org/10.1038/s41598-025-96508-3.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Braga M., Milanese G.C., Pasi G. Investigating Large Language Models’ Linguistic Abilities for Text Preprocessing // arXiv preprint arXiv:2510.11482. 13 Oct. 2025. https://doi.org/10.48550/arXiv.2510.11482.</mixed-citation><mixed-citation xml:lang="en">Braga M., Milanese G.C., Pasi G. Investigating Large Language Models’ Linguistic Abilities for Text Preprocessing. arXiv preprint arXiv:2510.11482. 13 Oct. 2025 [In English]. https://doi.org/10.48550/arXiv.2510.11482.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Brucks M., Toubia O. Prompt Architecture Induces Methodological Artifacts in Large Language Models // PloS one. 2025. Vol. 20(4). P. e0319159. https://doi.org/10.1371/journal.pone.0319159.</mixed-citation><mixed-citation xml:lang="en">Brucks M., Toubia O. Prompt Architecture Induces Methodological Artifacts in Large Language Models. PloS one. 2025; 20(4):e0319159 [In English]. https://doi.org/10.1371/journal.pone.0319159.</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Calderon R., Herrera F. And Plato Met ChatGPT: An Ethical Reflection on the Use of Chatbots in Scientific Research Writing, with a Particular Focus on the Social Sciences // Humanities and Social Sciences Communications. 2025. Vol. 12(1). P. 1–13. https://doi.org/10.1057/s41599-025-04650-0.</mixed-citation><mixed-citation xml:lang="en">Calderon R., Herrera F. 
And Plato Met ChatGPT: An Ethical Reflection on the Use of Chatbots in Scientific Research Writing, with a Particular Focus on the Social Sciences. Humanities and Social Sciences Communications. 2025; 12(1):1–13 [In English]. https://doi.org/10.1057/s41599-025-04650-0.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Chen K., He Z., Yan J., Shi T., Lerman K. How Susceptible are Large Language Models to Ideological Manipulation? // arXiv preprint arXiv:2402.11725. 18 Jun. 2024. https://doi.org/10.48550/arXiv.2402.11725.</mixed-citation><mixed-citation xml:lang="en">Chen K., He Z., Yan J., Shi T., Lerman K. How Susceptible are Large Language Models to Ideological Manipulation? arXiv preprint arXiv:2402.11725. 18 Jun. 2024 [In English]. https://doi.org/10.48550/arXiv.2402.11725.</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Colonel J.T., Lin B. Word Clouds as Common Voices: LLM-Assisted Visualization of Participant-Weighted Themes in Qualitative Interviews // Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+ NLP). 2025. P. 169–177. https://doi.org/10.18653/v1/2025.hcinlp-1.14.</mixed-citation><mixed-citation xml:lang="en">Colonel J.T., Lin B. Word Clouds as Common Voices: LLM-Assisted Visualization of Participant-Weighted Themes in Qualitative Interviews, In Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+ NLP). 2025:169–177 [In English]. https://doi.org/10.18653/v1/2025.hcinlp-1.14.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">De-Marcos L., Domínguez-Díaz A. LLM-Based Topic Modeling for Dark Web Q&amp;A forums: A Comparative Analysis with Traditional Methods // IEEE Access. 2025. Vol. 13. P. 
67159–67169. https://doi.org/10.1109/ACCESS.2025.3560543.</mixed-citation><mixed-citation xml:lang="en">De-Marcos L., Domínguez-Díaz A. LLM-Based Topic Modeling for Dark Web Q&amp;A forums: A Comparative Analysis with Traditional Methods. IEEE Access. 2025; 13:67159–67169 [In English]. https://doi.org/10.1109/ACCESS.2025.3560543.</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Fei W., Niu X., Zhou P., Hou L., Bai B., Deng L., Han W. Extending Context Window of Large Language Models Via Semantic Compression // Findings of the Association for Computational Linguistics: ACL. 2024. P. 5169–5181. https://doi.org/10.18653/v1/2024.findings-acl.306.</mixed-citation><mixed-citation xml:lang="en">Fei W., Niu X., Zhou P., Hou L., Bai B., Deng L., Han W. Extending Context Window of Large Language Models Via Semantic Compression. Findings of the Association for Computational Linguistics: ACL. 2024:5169–5181 [In English]. https://doi.org/10.18653/v1/2024.findings-acl.306.</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Gao Q., Feng D. Deploying large language models for discourse studies: An exploration of automated analysis of media attitudes // PloS one. 2025. Vol. 20(1). P. e0313932. https://doi.org/10.1371/journal.pone.0313932.</mixed-citation><mixed-citation xml:lang="en">Gao Q., Feng D. Deploying large language models for discourse studies: An exploration of automated analysis of media attitudes. PloS one. 2025; 20(1):e0313932 [In English]. https://doi.org/10.1371/journal.pone.0313932.</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">Halterman A., Keith K. A. Codebook LLMs: Evaluating LLMs as Measurement Tools for Political Science Concepts // arXiv preprint arXiv:2407.10747. 9 Jan. 2025. 
https://doi.org/10.48550/arXiv.2407.10747.</mixed-citation><mixed-citation xml:lang="en">Halterman A., Keith K.A. Codebook LLMs: Evaluating LLMs as Measurement Tools for Political Science Concepts. arXiv preprint arXiv:2407.10747. 9 Jan. 2025 [In English]. https://doi.org/10.48550/arXiv.2407.10747.</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Herbst P., Baars H. Accelerating Literature Screening for Systematic Literature Reviews with Large Language Models-Development, Application, and First Evaluation of a Solution // LWDA: Learning, Knowledge, Data, Analysis. 2023. P. 41–51.</mixed-citation><mixed-citation xml:lang="en">Herbst P., Baars H. Accelerating Literature Screening for Systematic Literature Reviews with Large Language Models-Development, Application, and First Evaluation of a Solution. LWDA: Learning, Knowledge, Data, Analysis. 2023:41–51 [In English].</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">Jenner S.E., Raidos D., Anderson E., Fleetwood S., Ainsworth B., Fox K., Kreppner J., Barker M. Using Large Language Models for Narrative Analysis: A Novel Application of Generative AI // Methods in Psychology. 2025. Vol. 12. P. 100183. https://doi.org/10.1016/j.metip.2025.100183.</mixed-citation><mixed-citation xml:lang="en">Jenner S.E., Raidos D., Anderson E., Fleetwood S., Ainsworth B., Fox K., Kreppner J., Barker M. Using Large Language Models for Narrative Analysis: A Novel Application of Generative AI. Methods in Psychology. 2025; 12:100183 [In English]. https://doi.org/10.1016/j.metip.2025.100183.</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Karjus A. Machine-Assisted Quantitizing Designs: Augmenting Humanities and Social Sciences with Artificial Intelligence // arXiv preprint arXiv:2309.14379. 
20 Oct. 2024. https://doi.org/10.48550/arXiv.2309.14379.</mixed-citation><mixed-citation xml:lang="en">Karjus A. Machine-Assisted Quantitizing Designs: Augmenting Humanities and Social Sciences with Artificial Intelligence. arXiv preprint arXiv:2309.14379. 20 Oct. 2024 [In English]. https://doi.org/10.48550/arXiv.2309.14379.</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">Krippendorff, K. Computing Krippendorff’s Alpha-Reliability. Philadelphia: University of Pennsylvania, 2011.</mixed-citation><mixed-citation xml:lang="en">Krippendorff K. Computing Krippendorff’s Alpha-Reliability. Philadelphia: University of Pennsylvania, 2011 [In English].</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Kulkarni A., Alotaibi F., Zeng X., Wu L., Zeng T., Yao B.M., Liu M., Zhang Sh., Huang L., Zhou D. Scientific Hypothesis Generation and Validation: Methods, Datasets, and Future Directions // arXiv preprint arXiv:2505.04651. 6 May 2025. https://doi.org/10.48550/arXiv.2505.04651.</mixed-citation><mixed-citation xml:lang="en">Kulkarni A., Alotaibi F., Zeng X., Wu L., Zeng T., Yao B.M., Liu M., Zhang Sh., Huang L., Zhou D. Scientific Hypothesis Generation and Validation: Methods, Datasets, and Future Directions. arXiv preprint arXiv:2505.04651. 6 May 2025 [In English]. https://doi.org/10.48550/arXiv.2505.04651.</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">Kuribayashi T., Oseki Yo., Brassard A., Inui K. Scientific Context limitations make neural language models more human-like // arXiv preprint arXiv:2205.11463. 1 Nov. 2022. https://doi.org/10.48550/arXiv.2205.11463.</mixed-citation><mixed-citation xml:lang="en">Kuribayashi T., Oseki Yo., Brassard A., Inui K. Scientific Context limitations make neural language models more human-like. 
arXiv preprint arXiv:2205.11463. 1 Nov. 2022 [In English]. https://doi.org/10.48550/arXiv.2205.11463.</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">Li X., Tang H., Chen S., Wang Z., Chen R., Abramet M. Why Does In-Context Learning Fail Sometimes? Evaluating in-Context Learning on Open and Closed Questions // arXiv preprint arXiv:2407.02028. 2 Jul. 2024. https://doi.org/10.48550/arXiv.2407.02028.</mixed-citation><mixed-citation xml:lang="en">Li X., Tang H., Chen S., Wang Z., Chen R., Abramet M. Why Does In-Context Learning Fail Sometimes? Evaluating in-Context Learning on Open and Closed Questions. arXiv preprint arXiv:2407.02028. 2 Jul. 2024 [In English]. https://doi.org/10.48550/arXiv.2407.02028.</mixed-citation></citation-alternatives></ref><ref id="cit26"><label>26</label><citation-alternatives><mixed-citation xml:lang="ru">Li Y. A Practical Survey on Zero-Shot Prompt Design for In-Context Learning // arXiv preprint arXiv:2309.13205. 22 Sep. 2023. https://doi.org/10.26615/978-954-452-092-2_069.</mixed-citation><mixed-citation xml:lang="en">Li Y. A Practical Survey on Zero-Shot Prompt Design for In-Context Learning. arXiv preprint arXiv:2309.13205. 22 Sep. 2023 [In English]. https://doi.org/10.26615/978-954-452-092-2_069.</mixed-citation></citation-alternatives></ref><ref id="cit27"><label>27</label><citation-alternatives><mixed-citation xml:lang="ru">Liu J., Yang Ch., Yan Zh., Ma X., Peiet L. Leveraging Generative AI through Prompt Engineering for Corpus Construction and In-Depth Intelligent Interpretation of Ancient Texts // Digital Scholarship in the Humanities. 2025. Vol. 40, Issue 3. P. 846–862. https://doi.org/10.1093/llc/fqaf043.</mixed-citation><mixed-citation xml:lang="en">Liu J., Yang Ch., Yan Zh., Ma X., Peiet L. Leveraging Generative AI through Prompt Engineering for Corpus Construction and In-Depth Intelligent Interpretation of Ancient Texts. 
Digital Scholarship in the Humanities. 2025; 40(3):846–862 [In English]. https://doi.org/10.1093/llc/fqaf043.</mixed-citation></citation-alternatives></ref><ref id="cit28"><label>28</label><citation-alternatives><mixed-citation xml:lang="ru">Manohar K., Pillai L.G. What is Lost in Normalization? Exploring Pitfalls in Multilingual ASR Model Evaluations // Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024. P. 10864–10869. https://doi.org/10.18653/v1/2024.emnlp-main.607.</mixed-citation><mixed-citation xml:lang="en">Manohar K., Pillai L.G. What is Lost in Normalization? Exploring Pitfalls in Multilingual ASR Model Evaluations. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024:10864–10869 [In English]. https://doi.org/10.18653/v1/2024.emnlp-main.607.</mixed-citation></citation-alternatives></ref><ref id="cit29"><label>29</label><citation-alternatives><mixed-citation xml:lang="ru">Piper A., Bagga S. Using Large Language Models for Understanding Narrative Discourse // Proceedings of the 6th Workshop on Narrative Understanding. 2024. P. 37–46. https://doi.org/10.18653/v1/2024.wnu-1.4.</mixed-citation><mixed-citation xml:lang="en">Piper A., Bagga S. Using Large Language Models for Understanding Narrative Discourse. In Proceedings of the 6th Workshop on Narrative Understanding. 2024:37–46 [In English]. https://doi.org/10.18653/v1/2024.wnu-1.4.</mixed-citation></citation-alternatives></ref><ref id="cit30"><label>30</label><citation-alternatives><mixed-citation xml:lang="ru">Razavi A., Soltangheis M., Arabzadeh N., Salamat S., Zihayat M., Bagheri E. Benchmarking Prompt Sensitivity in Large Language Models // European Conference on Information Retrieval. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2025. P. 303–313. 
https://doi.org/10.1007/978-3-031-88714-7_29.</mixed-citation><mixed-citation xml:lang="en">Razavi A., Soltangheis M., Arabzadeh N., Salamat S., Zihayat M., Bagheri E. Benchmarking Prompt Sensitivity in Large Language Models. In European Conference on Information Retrieval. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2025:303–313 [In English]. https://doi.org/10.1007/978-3-031-88714-7_29.</mixed-citation></citation-alternatives></ref><ref id="cit31"><label>31</label><citation-alternatives><mixed-citation xml:lang="ru">Rueda A., Hassan M.S., Perivolaris A., Teferra B.G., Samavi R., Rambhatla S., Wu Y., Zhang Y., Cao B., Sharma D., Krishnan S., Bhatet V. Understanding LLM Scientific Reasoning through Promptings and Model’s Explanation on the Answers // arXiv preprint arXiv:2505.01482. 25 Jul. 2025. https://doi.org/10.48550/arXiv.2505.01482.</mixed-citation><mixed-citation xml:lang="en">Rueda A., Hassan M.S., Perivolaris A., Teferra B.G., Samavi R., Rambhatla S., Wu Y., Zhang Y., Cao B., Sharma D., Krishnan S., Bhatet V. Understanding LLM Scientific Reasoning through Promptings and Model’s Explanation on the Answers. arXiv preprint arXiv:2505.01482. 25 Jul. 2025 [In English]. https://doi.org/10.48550/arXiv.2505.01482.</mixed-citation></citation-alternatives></ref><ref id="cit32"><label>32</label><citation-alternatives><mixed-citation xml:lang="ru">Sebastian R., Kottekkadan N.N., Thomas T.K., Niyas M. Generative AI Tools (ChatGPT*) in Social Science Research // Journal of Information, Communication and Ethics in Society. 2025. Vol. 23(2). P. 284–290. https://doi.org/10.1108/JICES-10-2024-0145.</mixed-citation><mixed-citation xml:lang="en">Sebastian R., Kottekkadan N.N., Thomas T.K., Niyas M. Generative AI Tools (ChatGPT*) in Social Science Research. Journal of Information, Communication and Ethics in Society. 2025; 23(2):284–290 [In English]. 
https://doi.org/10.1108/JICES-10-2024-0145.</mixed-citation></citation-alternatives></ref><ref id="cit33"><label>33</label><citation-alternatives><mixed-citation xml:lang="ru">Sun Y., Kok S. Investigating the Effects of Cognitive Biases in Prompts on Large Language Model Outputs // arXiv preprint arXiv:2506.12338. 14 Jun. 2025. https://doi.org/10.48550/arXiv.2506.12338.</mixed-citation><mixed-citation xml:lang="en">Sun Y., Kok S. Investigating the Effects of Cognitive Biases in Prompts on Large Language Model Outputs. arXiv preprint arXiv:2506.12338. 14 Jun. 2025 [In English]. https://doi.org/10.48550/arXiv.2506.12338.</mixed-citation></citation-alternatives></ref><ref id="cit34"><label>34</label><citation-alternatives><mixed-citation xml:lang="ru">Tao Y., Shen Q. Academic Discourse on ChatGPT in Social Sciences: A Topic Modeling and Sentiment Analysis of Research Article Abstracts // PloS one. 2025. Vol. 20(10). P. e0334331. https://doi.org/10.1371/journal.pone.0334331.</mixed-citation><mixed-citation xml:lang="en">Tao Y., Shen Q. Academic Discourse on ChatGPT in Social Sciences: A Topic Modeling and Sentiment Analysis of Research Article Abstracts. PloS one. 2025; 20(10):e0334331 [In English]. https://doi.org/10.1371/journal.pone.0334331.</mixed-citation></citation-alternatives></ref><ref id="cit35"><label>35</label><citation-alternatives><mixed-citation xml:lang="ru">Törnberg P. Large Language Models Outperform Expert Coders and Supervised Classifiers at Annotating Political Social Media Messages // Social Science Computer Review. 2024. Vol. 43, Issue 6. https://doi.org/10.1177/08944393241286471.</mixed-citation><mixed-citation xml:lang="en">Törnberg P. Large Language Models Outperform Expert Coders and Supervised Classifiers at Annotating Political Social Media Messages. Social Science Computer Review. 2024; 43(6) [In English]. 
https://doi.org/10.1177/08944393241286471.</mixed-citation></citation-alternatives></ref><ref id="cit36"><label>36</label><citation-alternatives><mixed-citation xml:lang="ru">Wang X., Salmani M., Omidi P., Ren X., Rezagholizadeh M., Eshaghiet A. Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models // arXiv preprint arXiv:2402.02244. 29 May 2024. https://doi.org/10.48550/arXiv.2402.02244.</mixed-citation><mixed-citation xml:lang="en">Wang X., Salmani M., Omidi P., Ren X., Rezagholizadeh M., Eshaghiet A. Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models. arXiv preprint arXiv:2402.02244. 29 May 2024 [In English]. https://doi.org/10.48550/arXiv.2402.02244.</mixed-citation></citation-alternatives></ref><ref id="cit37"><label>37</label><citation-alternatives><mixed-citation xml:lang="ru">Zabaleta M., Lehman J. Simulating Tabular Datasets through LLMs to Rapidly Explore Hypotheses about Real-World Entities // arXiv preprint arXiv:2411.18071. 27 Nov. 2024. https://doi.org/10.48550/arXiv.2411.18071.</mixed-citation><mixed-citation xml:lang="en">Zabaleta M., Lehman J. Simulating Tabular Datasets through LLMs to Rapidly Explore Hypotheses about Real-World Entities. arXiv preprint arXiv:2411.18071. 27 Nov. 2024 [In English]. https://doi.org/10.48550/arXiv.2411.18071.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The authors declare that there are no conflicts of interest present.</p></fn></fn-group></back></article>
