Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Permanent URI for this collection: https://hdl.handle.net/20.500.11779/1926
Browsing Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection by Language "tr"
Now showing 1 - 20 of 51
Conference Object
Dog Walker Segmentation (IEEE, 2022)
Ercan, Alperen; Karan, Baris; Çakar, Tuna
In this study, dog walkers were separated into clusters according to their walking habits. Because the distributions were non-normal, normalization algorithms were applied before clustering. After normalization, the K-Means algorithm and Gaussian Mixture Models were used to find the optimum cluster count. Based on these clusters, walkers' consecutive months were separated in order to follow up on their behavioral traits. This part of the study adds value to the project by allowing a closer examination of walkers' behaviors.
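The abstract does not include code; the following is a minimal sketch of the pipeline it describes (normalize non-normal features, then score candidate cluster counts with K-Means and Gaussian Mixture Models) using scikit-learn. The feature set and data are hypothetical placeholders, not the study's dataset.

import numpy as np
from sklearn.preprocessing import PowerTransformer
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Hypothetical walk-habit features, e.g. walks per week, mean duration, mean distance.
rng = np.random.default_rng(0)
X_raw = rng.lognormal(mean=1.0, sigma=0.5, size=(500, 3))  # non-normal by construction

# Normalize the skewed distributions before clustering.
X = PowerTransformer(method="yeo-johnson").fit_transform(X_raw)

# Score candidate cluster counts: silhouette for K-Means, BIC for the GMM.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    bic = GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}  GMM BIC={bic:.1f}")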
Conference Object
İnternet Trafik Hızının Tahmininde Derin Öğrenme ve Ağaç Tabanlı Modellerin Karşılaştırılması (Institute of Electrical and Electronics Engineers Inc., 2025)
Filiz, Gozde; Altıntaş, Suat; Yıldız, Ayşenur; Kara, Erkan; Drias, Yassine; Çakar, Tuna
This study addresses the prediction of internet traffic speed using time-dependent data from an internet service provider through different modeling approaches. On an anonymized dataset, the performance of the moving average method, various deep learning models (N-BEATS, N-HITS, TimesNet, TSMixer, LSTM), and an XGBoost regression model enhanced with feature engineering was compared. Time series cross-validation and random hyperparameter search were used for model training. According to the results, the XGBoost model achieved the highest accuracy with 98.7% explained variance (R2), while among the deep learning models, N-BEATS and N-HITS achieved the best performance with R2 values around 90%. The findings indicate that tree-based methods supported by carefully selected features can offer higher accuracy and computational efficiency than complex deep learning models in internet traffic forecasting. © 2025 Elsevier B.V., All rights reserved.
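A minimal sketch of the tree-based setup described above, assuming engineered lag and calendar features and time series cross-validation; the synthetic series and feature choices are placeholders, not the paper's anonymized data.

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import r2_score

# Placeholder hourly traffic-speed series standing in for the anonymized dataset.
t = pd.date_range("2024-01-01", periods=2000, freq="h")
y = pd.Series(50 + 10 * np.sin(np.arange(2000) / 24) + np.random.randn(2000), index=t)

# Simple engineered features: lags and calendar fields.
df = pd.DataFrame({"lag1": y.shift(1), "lag24": y.shift(24),
                   "hour": t.hour, "dayofweek": t.dayofweek, "y": y}).dropna()
X, target = df.drop(columns="y"), df["y"]

# Time series cross-validation keeps each training fold strictly before its test fold.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = xgb.XGBRegressor(n_estimators=200, max_depth=5, learning_rate=0.1)
    model.fit(X.iloc[train_idx], target.iloc[train_idx])
    pred = model.predict(X.iloc[test_idx])
    print(f"fold R2 = {r2_score(target.iloc[test_idx], pred):.3f}")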
Conference Object
Citation - Scopus: 2
A Visualization Platform for Disk Failure Analysis (IEEE, 2018)
Arslan, Şuayb Şefik; Yiğit, İbrahim Onuralp; Zeydan, Engin
It has become a norm rather than an exception to observe multiple disk malfunctions or whole-disk failures in places like big data centers, where thousands of drives operate simultaneously. Data residing on these devices is typically protected by replication or erasure coding for long-term durable storage. However, to optimize data protection methods, real-life disk failure trends need to be modeled. Modeling helps us build insights during the design phase and properly optimize protection methods for a given application. In this study, we developed a visualization platform based on the disk failure data provided by BackBlaze and extracted useful statistical information such as failure rates and model-based time-to-failure distributions. Finally, simple modeling is performed for disk failure prediction, in order to raise alarms and take necessary system-wide precautions.

Article
Citation - Scopus: 9
Bilinçli-farkındalık Temelli Öz-yeterlik Ölçeği-yenilenmiş (bföö-y): Türkiye Uyarlama Çalışması (2017)
Taylan, Rukiye Didem; Bulgan, Gökçe; Atalay, Zümra; Aydın, Utkun
The aim of this study is to adapt the Mindfulness-Based Self Efficacy Scale-Revised, developed by Cayoun, Francis, Kasselis, and Skilbeck (2012), into Turkish and to investigate its validity and reliability. The original scale is in English and is a five-point Likert-type instrument consisting of 22 items across six dimensions. The adapted Turkish form was administered to 713 students attending the 5th, 6th, and 7th grades of two different public schools. Cronbach's alpha internal consistency coefficients for the full scale (α = .72) and for its Emotion Regulation (α = .73), Emotional Balance (α = .68), Social Skills (α = .65), Distress Tolerance (α = .62), Taking Responsibility (α = .61), and Interpersonal Effectiveness (α = .65) subscales are at acceptable levels, given the low number of items in each subscale. Discriminant validity analyses showed no significant difference between the mean mindfulness-based self-efficacy scores of girls and boys, while significant differences were observed across grade levels. The results indicate that this Turkish adaptation is a valid and reliable instrument for determining students' mindfulness-based self-efficacy levels. Theoretical and methodological implications of the findings are discussed.
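For reference, the Cronbach's alpha coefficient reported above is the standard internal-consistency statistic (this is the textbook definition, not a formula taken from the paper):

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)
\]
% k: number of items in the (sub)scale,
% \sigma^2_{Y_i}: variance of item i,
% \sigma^2_X: variance of the total scale score.

With few items per subscale, the sum of item variances stays close to the total-score variance, which is why moderate alpha values (.61 to .73) are treated as acceptable here.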
Conference Object
Citation - WoS: 15
Citation - Scopus: 40
An Overview of Blockchain Technologies: Principles, Opportunities and Challenges (IEEE, 2018)
Arslan, Şuayb Şefik; Mermer, Gültekin Berahan; Zeydan, Engin
Blockchain is a recently emerged technology with the potential to revolutionize the way our society communicates and trades. Its most important advantage is the ability to exchange value-bearing transactions without relying on a trusted central authority in settings that would otherwise require an intermediary. It can also provide data integrity, built-in authenticity, and user transparency. Blockchain can be seen as a new internet on which many innovative applications will be built. In this study, we present an overview of current blockchain technologies, covering their general working principles, emerging opportunities, and the challenges that may lie ahead.

Conference Object
EAFT: Evolutionary Algorithms for GCC Flag Tuning (IEEE, 2022)
Tagtekin, Burak; Çakar, Tuna
Due to limited resources, certain methods come to the fore for finding and applying the factors that affect the running time of code. The most common is choosing the correct GCC flags using heuristic algorithms. For code compiled with GCC, the selection of optimization flags directly affects processing speed; however, choosing the right ones among hundreds of flags is a resource-consuming problem. This article explains how to solve the GCC flag optimization problem with EAFT. Unlike other autotuner tools such as OpenTuner, EAFT is a tool optimized specifically for GCC flag selection. Its search infrastructure was developed with particle swarm optimization and a genetic algorithm with different submodels, rather than using only a genetic algorithm as FOGA does. © 2022 IEEE.
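This is not EAFT's implementation; it is a minimal sketch of the general technique the abstract names, a genetic algorithm over a small pool of example GCC flags. The flag list, benchmark.c, and the GA parameters are all hypothetical.

import random
import subprocess
import time

# Hypothetical pool of flags to toggle; real tuners search a far larger space.
FLAGS = ["-O2", "-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer",
         "-finline-functions", "-fno-strict-aliasing"]

def fitness(genome):
    """Compile benchmark.c with the enabled flags and return the measured runtime."""
    flags = [f for f, on in zip(FLAGS, genome) if on]
    subprocess.run(["gcc", *flags, "benchmark.c", "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

def evolve(pop_size=10, generations=5):
    pop = [[random.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower runtime = fitter (recompiles each individual)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(FLAGS))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(FLAGS))       # point mutation: flip one flag
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)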
Conference Object
Citation - WoS: 1
Citation - Scopus: 1
Improving the Usage of Subword-Based Units for Turkish Speech Recognition (IEEE, 2020)
Çetinkaya, Gözde; Saraçlar, Murat; Arısoy, Ebru
Subword units are often utilized to achieve better performance in speech recognition because of the high number of observed words in agglutinative languages. In this study, the proper use of subword units in recognition is explored by reconsidering details such as silence modeling and position-dependent phones. A lexicon modified with finite-state transducers is implemented to represent the subword units correctly. We also experiment with different types of word boundary markers and achieve the best performance by adding a marker to both the left and right sides of a subword unit. In our experiments on a Turkish broadcast news dataset, the subword models outperform word-based models and naive subword implementations. Results show that using proper subword units leads to a relative word error rate (WER) reduction of 2.4% compared with the word-level automatic speech recognition (ASR) system for Turkish.

Conference Object
Citation - WoS: 1
Citation - Scopus: 1
Domain Adaptation Approaches for Acoustic Modeling (IEEE, 2020)
Arısoy, Ebru; Fakhan, Enver
In recent years, with the development of neural network based models, ASR systems have achieved a tremendous performance increase. However, this performance increase mostly depends on the amount of training data and the computational power available. In a low-resource data scenario, publicly available datasets can be utilized to overcome data scarcity. Furthermore, using a pre-trained model and adapting it to the in-domain data can help with computational constraints. In this paper we leverage two different publicly available datasets and investigate various acoustic model adaptation approaches. We show that a 4% word error rate can be achieved using very limited in-domain data.

Conference Object
Yapay Öğrenme Tabanlı Mikrofaktoring Skorlama Modeli ve Kredi Risk Yönetim Sistemi Geliştirilmesi (Institute of Electrical and Electronics Engineers Inc., 2025)
Sayar, Alperen; Ates, Yigit; Ertugrul, Seyit; Turan, Elif Naz; Drias, Yassine; Çakar, Tuna
Credit scoring systems are critical tools used by factoring institutions to assess the credit risks of SME businesses seeking microloans. This study presents a comprehensive predictive modeling framework that achieves 82.67% ROC-AUC with a 65.34% Gini score on test data, demonstrating robust discriminative capability despite significant class imbalance. Our ensemble approach outperforms individual boosting models by leveraging their complementary strengths in payment behavior analysis and fraud detection. The raw data was cleaned, transformed, and optimized using the Polars library, with specialized features for detecting fraud patterns and time-based risk indicators. With a score threshold of 950, our model significantly improves the detection of non-performing loans (NPL) compared to traditional rule-based approaches, reducing the net deficit from 6.59% to 2.62%. When applied to previously rejected applications, the model projects a potential 762.57% increase in transaction count and 747.05% growth in transaction volume. © 2025 Elsevier B.V., All rights reserved.

Conference Object
Predicting Credit Repayment Capacity With Machine Learning Models (IEEE, 2024)
Filiz, Gozde; Bodur, Tolga; Yaslidag, Nihal; Sayar, Alperen; Çakar, Tuna
This study examines the transformation of the financial services sector, particularly banking, driven by the rapid development of technology and the widespread use of big data, and its impact on credit prediction processes. The developed credit prediction model aims to predict customers' credit repayment capacities more accurately. To this end, customers' demographic and financial data, along with their credit histories, were used to apply data preprocessing techniques and test various classification algorithms. Findings indicate that models developed with the XGBoost and CatBoost algorithms exhibit the highest performance, and that effective use of feature engineering techniques enhances the models' accuracy and reliability. The research highlights the potential for financial institutions to gain a competitive advantage in risk management and customer relationship management by leveraging machine learning models.

Conference Object
Citation - Scopus: 1
CNN-Based Emotion Recognition Using Data Augmentation and Preprocessing Methods (Institute of Electrical and Electronics Engineers Inc., 2023)
Toktaş, Tolga; Kırbız, Serap; Kayaoğlu, Bora
In this paper, a system that recognizes emotions from human faces is designed using Convolutional Neural Networks (CNN). CNNs are known to perform well when trained on a large database. The lack of large, balanced, publicly available databases that deep learning methods can use for emotion recognition is still a challenge. To overcome this problem, the amount of data is increased by merging the FER+, CK+, and KDEF databases, and preprocessing is applied to the face images to reduce the variation in the database. Data augmentation methods are used to reduce the imbalance in the data distribution that remains despite the larger merged database. The CNN-based method developed using database merging, image preprocessing, and data augmentation achieved emotion recognition with 80% accuracy.

Conference Object
Citation - WoS: 1
Citation - Scopus: 1
Face Recognition With Local Zernike Moments Features Around Landmarks (IEEE, 2016)
Gökmen, Muhittin
In this paper, a new method is proposed that extracts features from complex Local Zernike Moments (LZM) images around facial landmarks. In this method, multiple grids of different sizes are placed on the landmarks and Phase-Magnitude (PM) histograms are calculated in each cell of these grids. The PM histograms are calculated for every component of the LZM, and feature vectors are created by concatenating these histograms. By reducing the dimensionality of these vectors using Whitened Principal Component Analysis, more robust descriptors are constructed. Experiments performed on the FERET database show that the proposed method achieves state-of-the-art results. © 2016 IEEE.

Conference Object
Citation - Scopus: 1
Physical Activity Monitoring With Smartwatch Technology in Adolescents and Obtaining Big Data: Preliminary Findings (IEEE, 2024)
Filiz, Gozde; Arman, Nilay; Ayaz, Nuray Aktay; Yekdaneh, Asena; Albayrak, Asya; Bozkan, Tunahan; Çakar, Tuna
This study assesses the potential of smartwatch technology for monitoring adolescents' physical activity and health parameters. It focuses on the role of physical activity in preventing chronic diseases and improving quality of life. The primary aim of the project is to perform statistical analysis of the large datasets collected from both healthy adolescents and those with chronic rheumatic diseases, and to develop a machine learning based classification model to distinguish between these two groups. The analysis highlights the physical inactivity observed during the Covid-19 pandemic while showcasing the capacity of technology to offer solutions. The study aims to evaluate the collected data in a way that forms the basis for personalized activity plans for adolescents, demonstrating how wearable technology and big data can be used effectively in health services and to promote physical activity.

Conference Object
Citation - Scopus: 5
High-Performance Real-Time Data Processing: Managing Data Using Debezium, Postgres, Kafka, and Redis (IEEE, 2023)
Çakar, Tuna; Ertuğrul, Seyit; Arslan, Şuayip; Sayar, Alperen; Akçay, Ahmet
This research focuses on monitoring and transferring logs of operations performed on a relational database, specifically PostgreSQL, in real time using an event-driven approach. The logs generated by database operations are transferred to Redis, a non-relational (NoSQL) key-value database, using Apache Kafka, an open-source message queuing system, and Debezium running on Kafka. Time-consuming query and read operations are performed on Redis, which operates in memory, instead of on the primary database, PostgreSQL. This approach has significantly improved query execution performance, data processing time, and backend service performance. The study showcases the practical application of an event-driven approach using Debezium, Kafka, Redis, and relational databases for real-time data processing and querying.
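A minimal sketch of the consuming side of such a pipeline, using the kafka-python and redis client libraries: a consumer reads Debezium change events from a Kafka topic and mirrors the latest row state into Redis. The topic name, key scheme, and connection details are hypothetical, and the event layout assumes Debezium's default envelope with schemas enabled.

import json
import redis
from kafka import KafkaConsumer

# Hypothetical Debezium topic for a PostgreSQL table ("server.schema.table" naming).
consumer = KafkaConsumer(
    "inventory.public.orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
cache = redis.Redis(host="localhost", port=6379)

for message in consumer:
    payload = message.value.get("payload", {})
    after = payload.get("after")   # row state after the change; None on delete
    if after is not None:
        # Upsert the latest row state so reads can hit Redis instead of PostgreSQL.
        cache.set(f"orders:{after['id']}", json.dumps(after))
    else:
        before = payload.get("before") or {}
        if "id" in before:
            cache.delete(f"orders:{before['id']}")  # remove deleted rows from the cache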
Conference Object
Evaluating Electrophysiological Responses Due To Identity Judgments (IEEE, 2024)
Çakar, Tuna; Hohenberger, Annette
This study was conducted to explore how the brain processes decisions about identity, using event-related potentials (ERPs) as a measure. The aim was to ascertain whether the EEG/ERP technique could be used to monitor the cognitive processing of identity judgments as they happen. The investigation compared two groups of statements: those using the concept 'same' and those using 'different'. The researchers hypothesized notable differences in the ERPs, particularly around the 400-millisecond mark, correlating with the reaction time disparities observed behaviorally. The ERP data revealed that the 'different' statements generated a distinct N400 response when contrasted with the 'same' statements, implying that the participants' cognitive responses to these two types of judgments were not the same.

Conference Object
Determination of Alzheimer's Disease Levels by Ordinal Logistic Regression and Artificial Learning Algorithms (IEEE, 2024)
Bulut, Nurgül; Çakar, Tuna; Arslan, Ilker; Akinci, Zeynep Karaoglu; Oner, Kevser Setenay
This study compares artificial learning algorithms and logistic regression models in determining different levels of Alzheimer's disease (AD). The research uses demographic, genetic, and neurocognitive inventory results obtained from the National Alzheimer's Coordinating Center (NACC) database, along with brain volume/thickness measurements derived from MRI scans. Deep Neural Networks, Ordinal Logistic Regression, Random Forest, Gaussian Naive Bayes, XGBoost, and LightGBM models were employed to determine the four ordinal levels of AD. Although the accuracy rate, F1 score, AUC value, and per-class sensitivity, specificity, and precision measures were similar across models, the highest classification rate was achieved by the Random Forest model without oversampling (F1 score: 0.86; accuracy: 0.86; AUC: 0.95). The outputs of the best-performing model were explained with the SHAP (SHapley Additive exPlanations) method. These findings indicate that non-invasive markers and artificial learning models can be used effectively in early diagnosis and decision support systems to predict different levels of Alzheimer's disease.
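A minimal sketch of how the SHAP explanation step could look for a tree-based classifier of this kind, using the shap package. The data and features below are random placeholders, not the NACC variables.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Placeholder features standing in for demographic / volumetric measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 4, size=300)  # four ordinal disease levels

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: per-feature contribution magnitudes across samples and classes.
shap.summary_plot(shap_values, X, show=False)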
Conference Object
Citation - WoS: 2
Citation - Scopus: 3
Data Repair in BS-Assisted Distributed Data Caching (IEEE, 2020)
Kaya, Erdi; Haytaoğlu, Elif; Arslan, Şuayb Şefik
In this paper, centralized and independent repair approaches based on device-to-device communication are investigated for the repair of lost nodes in a cellular network where distributed caching is applied and fault tolerance is provided by erasure codes. Caching mechanisms based on Reed-Solomon codes and minimum bandwidth regenerating codes are adopted. The proposed approaches are analyzed in a simulation environment in terms of base station utilization load during the repair process. Under the intuitive assumption that base station communication is usually more costly than device-to-device communication, the centralized repair approach performs better than the independent repair approaches in terms of the number of symbols retrieved from the base station. On the other hand, the centralized approach does not achieve a dramatic reduction in the number of symbols downloaded from the other devices.

Article
Citation - Scopus: 4
Yeni Koronavirüs (COVID-19) Sürecinde Türkiye’de Üniversite Kütüphaneleri (Üniversite ve Araştırma Kütüphanecileri Derneği, 2020)
Gürdal, Gültekin; Çanak, Tuba Akbaytürk; Çuhadar, Sami; Çimen, Ertuğrul
The aim of this study is to identify university libraries that had to close their buildings and pause or suspend face-to-face user services because of the novel coronavirus (COVID-19), to determine under what conditions and for what hours those that continued their services did so, and to offer recommendations on what should be done when libraries reopen. To address this aim within a holistic framework, the activities of publishers and professional organizations, as different stakeholders of university libraries, during this period were also examined at a global level. Another aim of the study is to determine how ready university libraries were to provide remote services in terms of infrastructure, budget, collections, user training, and staff, to identify the problems caused by this sudden situation, and to propose solutions to those problems. The data underlying the study were obtained from responses to survey questions sent to the library directors of 209 higher education institutions (129 state universities, 75 foundation universities, 5 foundation vocational schools). 84 institutions participated in the survey. The survey results were analyzed and visualized with Surveey.com and MS Excel.

Conference Object
Customer Segmentation and Churn Prediction via Customer Metrics (IEEE, 2022)
Bozkan, Tunahan; Cakar, Tuna; Sayar, Alperen; Ertugrul, Seyit
In this study, the aim is to predict whether customers operating in the factoring sector will continue to trade in the three months after their last transaction date, using data-driven machine learning models based on their past transaction movements and their risk, limit, and company data. With the resulting models, churn analysis was carried out for two customer groups (real persons and legal entities); churn was predicted by an XGBoost model with F1 scores of 74% and 77%, respectively. Through this modeling, together with the prediction of which customers will leave, the aim was to increase the retention rate through special promotions and campaigns targeted at these customer groups. The resulting increase in retention rates directly contributed to transaction volume on a company basis.

Conference Object
Noise Effect on Forecasting (IEEE, 2023)
Tuncer, Suat; Kayan, Ersan; Çakar, Tuna
The lack of regulation and liquidity in cryptocurrency markets causes higher volatility than in other financial markets, which increases the noise in price changes. The high noise and random-walk behavior create a problem that cannot be explained by traditional stochastic financial methods. For this reason, a multi-layered deep learning model with an additive attention layer, which uses a single observation in 10-day sequences, was used in this study. Different transformations were applied to reduce the noise in the closing values. Comparisons between the different approaches revealed that exponential moving averages, used as the value to predict, give better results than the other transformations and than estimating the original price, since they explain the price better than simple moving averages and reduce the noise of the original price.
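For illustration, a minimal pandas sketch of the two smoothing transforms compared above, a simple and an exponential moving average over a placeholder closing-price series (the window length and data are hypothetical):

import numpy as np
import pandas as pd

# Placeholder closing prices standing in for a noisy crypto series.
rng = np.random.default_rng(42)
close = pd.Series(100 + rng.normal(0, 2, 300).cumsum())

sma = close.rolling(window=10).mean()          # simple moving average
ema = close.ewm(span=10, adjust=False).mean()  # exponential moving average

# The EMA weights recent observations more heavily, so it tracks price with less lag,
# which is why it can serve as a smoother, less noisy prediction target than raw price.
targets = pd.DataFrame({"close": close, "sma10": sma, "ema10": ema})
print(targets.tail())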

