Academic Abstract Generation with Language Models

Date
2025
Journal Title
Journal ISSN
Volume Title
Publisher
Institute of Electrical and Electronics Engineers Inc.
IEEE
Abstract
In recent years, large language models have demonstrated extraordinary capabilities in natural language processing tasks. The integration of these models into text summarization has highlighted the need to evaluate varying model performance within a standardized benchmarking framework. In this study, the performance of different large language models in generating abstracts of scientific papers, which share a common structure and a distinctive language, is compared through an extensive experimental analysis. The abstracts automatically generated by these models using prompt engineering were evaluated with various metrics based on content overlap and semantic similarity. The results demonstrate the effectiveness of large language models in abstract generation.
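The abstract reports evaluation via metrics based on content overlap and semantic similarity. The record does not name the specific metrics used; as an illustrative sketch, the snippet below computes a ROUGE-1-style unigram-overlap F1 and a term-frequency cosine similarity (a crude stand-in for embedding-based measures such as BERTScore) between a reference abstract and a model-generated one.

```python
from collections import Counter
import math

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1 between a reference and a generated abstract."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of term-frequency vectors, a simple proxy for
    semantic similarity (real studies would use contextual embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Both scores fall in [0, 1]; overlap metrics reward exact wording while similarity metrics tolerate paraphrase, which is why benchmarking studies typically report both families.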
Description
Isik University
Keywords
Large Language Models, Benchmarking, Text Summarization, Scientific Publications
Fields of Science
Citation
WoS Q
N/A
Scopus Q
N/A

OpenCitations Citation Count
N/A
Source
33rd IEEE Conference on Signal Processing and Communications Applications, SIU 2025 -- Istanbul; Isik University Sile Campus -- 211450
33rd Conference on Signal Processing and Communications Applications-SIU-Annual -- Jun 25-28, 2025 -- Istanbul, Türkiye
Volume
Issue
Start Page
1
End Page
4
PlumX Metrics
Citations
Scopus : 0
Captures
Mendeley Readers : 1