Bilgisayar Mühendisliği Bölümü Koleksiyonu (Computer Engineering Department Collection)
Permanent URI for this collection: https://hdl.handle.net/20.500.11779/1940
Browsing Bilgisayar Mühendisliği Bölümü Koleksiyonu by Scopus Q "N/A"
Now showing 1 - 20 of 73
Conference Object (Citation - Scopus: 2)
A Visualization Platform for Disk Failure Analysis (IEEE, 2018)
Arslan, Şuayb Şefik; Yiğit, İbrahim Onuralp; Zeydan, Engin
It has become a norm rather than an exception to observe multiple disk malfunctions or whole-disk failures in places like big data centers, where thousands of drives operate simultaneously. Data residing on these devices is typically protected by replication or erasure coding for long-term durable storage. However, to be able to optimize data protection methods, real-life disk failure trends need to be modeled. Modeling helps us build insights during the design phase and properly tune protection methods for a given application. In this study, we developed a visualization platform based on the disk failure data provided by Backblaze, and extracted useful statistical information such as failure rates and model-based time-to-failure distributions. Finally, simple modeling is performed for disk failure prediction, so that alarms can be raised and necessary system-wide precautions taken.

Conference Object (Citation - WoS: 1; Citation - Scopus: 1)
Adaptive Boosting of DNN Ensembles for Brain-Computer Interface Spellers (IEEE, 2021)
Çatak, Yiğit; Aksoy, Can; Özkan, Hüseyin; Güney, Osman Berke; Koç, Emirhan; Arslan, Şuayb Şefik
Steady-state visual evoked potentials (SSVEP) are commonly used in brain-computer interface (BCI) applications such as spelling systems, due to their advantages over other paradigms. In this study, we develop a method for SSVEP-based BCI speller systems using a known deep neural network (DNN), which includes transfer and ensemble learning techniques. We test the performance of our method on the publicly available benchmark and BETA datasets with a leave-one-subject-out procedure. Our method consists of two stages. In the first stage, a global DNN is trained using data from all subjects except one subject that is excluded for testing.
In the second stage, the global model is fine-tuned to each subject whose data are used in the training. Combining the responses of the trained DNNs with different weights for each test subject, rather than equal weights, provides better performance, as brain signals may differ significantly between individuals. To this end, the DNN weights are learned with the SAMME algorithm using data belonging to the test subject. Our method significantly outperforms the canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA) methods.

Patent
Adaptive Erasure Codes (2017)
Arslan, Şuayb Şefik; Göker, Turguy
Methods, apparatus, and other embodiments associated with adaptive use of erasure codes for distributed data storage systems are described. One example method includes accessing a message, where the message has a message size; selecting an encoding strategy as a function of the message size, data storage device failure statistics, data storage device wear periods, data storage space constraints, or overhead constraints, where the encoding strategy includes an erasure code approach; generating an encoded message using the encoding strategy; generating an encoded block, where the encoded block includes the encoded message and metadata associated with the message; and storing the encoded block in the data storage system. Example methods and apparatus may employ Reed-Solomon erasure codes or Fountain erasure codes. Example methods and apparatus may display to a user the storage capacity and durability of the data storage system.

Conference Object
An Exploratory Study on the Effect of Contour Types on Decision Making Via Optic Brain Imaging Method (fNIRS) (eScholarship, 2023)
Demircioglu, Esin Tuna; Girişken, Yener; Çakar, Tuna
Decision-making is a combination of our positive anticipations of the future with the contribution of our past experiences, emotions, and what we perceive at the moment.
Therefore, the cues perceived from the environment play an important role in shaping decisions. Contours, which are the hidden identity of objects, are among these cues. Aesthetic evaluation, on the other hand, has been shown to have a profound impact on decision-making, both as a subjective experience of beauty and as having an evolutionary background. The aim of this empirical study is to explain the effect of contour types on preference decisions in the prefrontal cortex through risk-taking and aesthetic appraisal. The obtained findings indicated a relation between preference decision, contour type, and PFC subregion. The results of the current study suggest that contour type is an effective cue in decision-making; furthermore, the left OFC and right dlPFC respond differently to contour types.

Conference Object (Citation - WoS: 14; Citation - Scopus: 40)
An Overview of Blockchain Technologies: Principles, Opportunities and Challenges (IEEE, 2018)
Arslan, Şuayb Şefik; Mermer, Gültekin Berahan; Zeydan, Engin
Blockchain is a recently emerged technology with the potential to revolutionize the way our society communicates and does business. The most important advantage this technology provides is the ability to exchange value-bearing transactions without the need for a trusted central institution in settings that would otherwise require an intermediary. It can also provide data integrity, built-in authenticity, and user transparency. Blockchain can be seen as the new internet on which many innovative applications will be built. In this work, we present an overview of current blockchain technologies, covering their general working principles, the opportunities they create, and the challenges that may be encountered in the future.

Conference Object (Citation - Scopus: 3)
An XML Parser for Turkish Wikipedia (IEEE, 2019)
Demir, Şeniz; Vardar, Uluç Furkan; Devran, İlkay Tevfik
Nowadays, visual and written data that can be easily accessed over the internet have enabled the development of research in many different fields.
However, the availability of data is not sufficient by itself. It is of great importance that these data can be effectively utilized and interpreted in accordance with the requirements. Access to the written content of the Wikipedia encyclopedia, which is increasingly used in Turkish natural language processing, is possible via XML dumps. In this study, our aim is to develop and demonstrate the applicability of an XML parser for processing Turkish Wikipedia dumps. The use of the open-source parser, which allows information extraction at different levels of granularity, is reported on pages containing biography infoboxes and textual contents.

Conference Object (Citation - Scopus: 1)
Analytical Approaches in Customer Relationship Management (IEEE, 2023)
Akata, Mustafa Aşkım; Ergin, Kaan; Kaya, Büşra; Kızılay, Ayşe; Çakar, Tuna; Şahin, Zeynep
This study examines the impact of analytical customer relationship management (aCRM) strategies, specifically a segmentation approach using RFM analysis and machine learning methods, on customer satisfaction, revenue performance, and loyalty in businesses. The research adopts an approach that integrates data from both online and offline channels onto a single platform, providing a holistic view of customer behaviors. Combining the segmentation obtained through RFM analysis and machine learning methods with timely campaigns has enhanced shopping opportunities for customers and increased customer satisfaction and loyalty. The use of aCRM as a strategic marketing and sales tool has enabled businesses to manage customer relationships more effectively.
This paper contributes to the literature in this field by presenting in detail the advantages offered by aCRM, its application methods, and the results obtained.

Conference Object
Analyzing Consumer Behavior: The Impact of Retro Music in Advertisements on a Chocolate Brand and Consumer Engagement (IEEE, 2023)
Girişken, Yener; Soyaltın, Tuğçe Ezgi; Filiz, Gözde; Çakar, Tuna; Türkyılmaz, Ceyda Aysuna
This study presents research utilizing binary classification models to analyze consumer behaviors such as chocolate consumption and retro music ad viewing. Retro music, with its potential to evoke nostalgic feelings in consumers, is used in advertisements, which can have a significant impact on brand perception and consumer engagement. First, a model focusing on chocolate consumption was developed and tested; the model yields significant outcomes. Second, a model based on retro music ad viewing status was developed and tested, and significant potential findings were obtained. This study emphasizes the applicability of effective classification models that can be used to understand and predict consumer behaviors.

Conference Object
Analyzing Customer Churn: A Comparative Study of Machine Learning Models on Pay-TV Subscribers in Turkey (IEEE, 2023)
Obalı, Emir; Çalışkan, Sibel Kırmızıgül; Karani Yılmaz, Veysel; Kara, Erkan; Meşe, Yasemin Kürtcü; Çakar, Tuna; Yıldız, Ayşenur; Hataş, Tuğce Aydın
Understanding the reasons for customer churn provides added value in terms of retaining existing customers, as customer attrition leads to revenue loss for companies and incurs marketing costs for acquiring new customers. In this study, six months of historical data from a Pay-TV company operating in Turkey were used, and due to the imbalanced nature of the dataset on a label basis, an oversampling method was applied.
During the model development phase, various machine learning algorithms (Random Forest, Logistic Regression, K-Nearest Neighbors, Decision Tree, AdaBoost, XGBoost, Extra Trees Classifier) were utilized, and their performances were compared. Based on the evaluation of the success criteria for each model, it was observed that the tree-based Random Forest, Extra Trees Classifier, and XGBoost achieved the highest performance on this dataset.

Patent
Artificial Intelligence Augmented Iterative Product Decoding (2023)
Arslan, Şuayb Şefik; Göker, Turguy
A method for product decoding within a data storage system includes receiving data to be decoded within a first decoder; performing a plurality of decoding iterations to decode the data utilizing a first decoder and a second decoder; and outputting fully decoded data based on the performance of the plurality of decoding iterations. Each of the plurality of decoding iterations includes (i) decoding the data with the first decoder operating at a first decoder operational mode to generate once-decoded data; (ii) sending the once-decoded data from the first decoder to the second decoder; (iii) receiving error information from the first decoder with an artificial intelligence system; (iv) selecting a second decoder operational mode based at least in part on the error information that is received by the artificial intelligence system; and (v) decoding the once-decoded data with the second decoder operating at the second decoder operational mode to generate twice-decoded data.

Conference Object
Attention-Enhanced Dual-Head LSTM With Rich Feature Engineering for Risk-Adjusted Stock Return Forecasting (IEEE, 2025)
Patel, J.; Gunes, P.; Ertugrul, S.; Sayar, A.; Benli, H.; Makaroglu, D.; Cakar, T.
Stock return forecasting is a challenging task due to the complex, nonlinear, and volatile nature of
financial markets. In this paper, we propose a comprehensive deep learning framework that integrates a two-layer Long Short-Term Memory (LSTM) network augmented with a learnable attention mechanism, a dual-head output for simultaneous regression of next-day returns and classification of price direction, and an extensive suite of technical and macro-financial features. Our feature set comprises lagged log-returns, trend indicators (simple and exponential moving averages), momentum oscillators (RSI, MACD), volatility measures (rolling variance and GARCH conditional volatility), price bands (Bollinger Bands, Donchian channels), volume metrics (On-Balance Volume, Volume Rate of Change), Hidden Markov Model regime states, market index returns, and calendar effects. We train and validate the model using a rolling-window cross-validation scheme with early stopping and hyperparameter tuning to ensure temporal robustness. Empirical results on a large multi-stock dataset demonstrate that our attention-enhanced, dual-task LSTM outperforms single-task LSTMs and traditional machine learning benchmarks, achieving lower forecasting error and more stable generalization. © 2025 IEEE.

Conference Object (Citation - WoS: 2; Citation - Scopus: 1)
Average Bandwidth-Cost vs. Storage Trade-Off for BS-Assisted Distributed Storage Networks (IEEE, 2021)
Tengiz, Ayse Ceyda; Haytaoğlu, Elif; Pusane, Ali Emre; Arslan, Şuayb Şefik; Pourmandi, Massoud
In this study, we consider a hierarchically structured base station (BS)-assisted cellular system equipped with a backend distributed data store, in which nodes randomly arrive at and depart the cell. We numerically motivate and characterize the fundamental trade-off between the average repair-bandwidth cost and storage space, where BS communication costs (higher than local ones) and link capacity constraints exist and the number of failed nodes can vary dynamically.
We establish the capacity region that is most relevant to 5G-and-beyond networks, which are layered by design. We hope that this study will motivate novel regenerating code constructions able to achieve the presented limits.

Conference Object
Churn Prediction for Subscription-Based Applications Using Machine Learning (IEEE, 2025)
Gozukara, H.; Patel, J.; Kara, E.; Yildiz, A.; Mese, Y. K.; Obali, E.; Cakar, T.
In this study, a predictive model was developed using machine learning techniques to forecast customer churn in subscription-based video streaming services. Data such as user login records, content viewing information, subscription details, and content-related features were used to interpret usage patterns, and customer churn was defined based on subscription renewal status and renewal timing. Several usage-based features were extracted for users, and several algorithms were used for modeling, such as Random Forest, CatBoost, XGBoost, Logistic Regression, K-Nearest Neighbors, and Gradient Boosting. Class imbalance in the target variable was handled via BorderlineSMOTE. The models' performance was evaluated using training-test accuracy plots, classification reports, and hyperparameter tuning. Even though most of the models performed similarly, the CatBoost model emerged as the top performer, achieving a macro F1-score of 0.60. However, while effective in identifying churners, the models struggled to precisely classify non-churning customers, a common challenge in imbalanced datasets even after applying oversampling techniques. The analysis of feature importance yielded a crucial insight: early and consistent user engagement is the strongest predictor of customer retention. These findings provide valuable, actionable insights for streaming platforms, emphasizing that retention strategies should focus on maximizing engagement immediately after a user subscribes.
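The churn study above handles its label imbalance with BorderlineSMOTE before model training. As a rough, stdlib-only illustration of that idea (not the authors' pipeline, which would use a library implementation such as imbalanced-learn's BorderlineSMOTE; the function and parameter names here are made up for this sketch), the snippet synthesizes minority samples only around "danger" points whose neighborhoods are dominated, but not fully occupied, by the majority class:

```python
import random

def dist(a, b):
    # Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def borderline_smote(X, y, minority=1, k=3, n_new=10, seed=0):
    """Toy Borderline-SMOTE: oversample only near the class boundary."""
    rng = random.Random(seed)
    pts = list(zip(X, y))
    min_pts = [x for x, lbl in pts if lbl == minority]
    danger = []
    for p in min_pts:
        # k nearest neighbours of p among all other samples
        neigh = sorted((q for q in pts if q[0] is not p),
                       key=lambda t: dist(p, t[0]))[:k]
        n_maj = sum(1 for _, lbl in neigh if lbl != minority)
        # "danger" point: at least half (but not all) neighbours are majority
        if k / 2 <= n_maj < k:
            danger.append(p)
    X_new, y_new = list(X), list(y)
    if not danger:
        return X_new, y_new
    for _ in range(n_new):
        p = rng.choice(danger)
        # interpolate towards another minority point
        q = rng.choice([m for m in min_pts if m is not p] or [p])
        gap = rng.random()
        X_new.append(tuple(pi + gap * (qi - pi) for pi, qi in zip(p, q)))
        y_new.append(minority)
    return X_new, y_new
```

Synthetic points lie on segments between boundary minority samples, which is why this family of methods sharpens the decision boundary rather than flooding the minority region uniformly.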
© 2025 IEEE.

Conference Object (Citation - WoS: 5; Citation - Scopus: 7)
Cloud2HDD: Large-Scale HDD Data Analysis on Cloud for Cloud Datacenters (IEEE, 2020)
Zeydan, Engin; Arslan, Şuayb Şefik
The main focus of this paper is to develop a distributed large-scale data analysis platform for the open-source data of the Backblaze cloud datacenter, which consists of operational hard disk drive (HDD) information collected over an observation period of 2272 days (over 74 months). To carefully analyze the intrinsic characteristics of hard disk behavior, we have exploited a large volume of data and the benefits of the Hadoop ecosystem as our big data processing engine. In other words, we have utilized a special distributed scheme on cloud for cloud HDD data, which is termed Cloud2HDD. To classify the remaining lifetime of hard disk drives based on health indicators such as built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) features, we used some of the state-of-the-art classification algorithms and compared their accuracy, precision, and recall rates simultaneously. In addition, the importance of various S.M.A.R.T. features in predicting the true remaining lifetime of HDDs is identified. For instance, our analysis results indicate that the Random Forest Classifier (RFC) can yield up to 94% accuracy with the highest precision and recall in reasonable time by classifying the remaining lifetime of drives into one of three classes (critical, high, and low ideal states), in comparison to other classification approaches based on a specific subset of S.M.A.R.T.
features.

Conference Object
Combining Similar Trajectories and XGBoost via Residual Learning for Traffic Flow Forecasting (IEEE, 2025)
Işlak, U.; Yilmaz, E.; Arslan, I.; Çakar, T.
In this study, we propose novel hybrid forecasting models that integrate the method of similar trajectories with machine learning techniques, particularly the XGBoost algorithm, for traffic flow prediction. Traditional statistical models, such as ARIMA, often struggle to accurately capture the complex, nonlinear patterns characteristic of traffic flow data. To address these limitations, we develop an additive hybrid forecasting framework that combines the strengths of linear models (the similar trajectories method) and nonlinear models (XGBoost). Our proposed methods are evaluated on two different stations from the California PeMS dataset. Experimental results demonstrate that the proposed hybrid models consistently outperform individual benchmark models, including ARIMA, standalone similar trajectories, and XGBoost. The superiority of the hybrid approach, particularly the XGBST model, is further validated through the Diebold-Mariano statistical test, confirming significant predictive improvements at various significance levels. Additionally, using a weighted Euclidean distance within the similar trajectories method further enhanced forecasting accuracy. The interpretability and flexibility of our hybrid framework make it especially suitable for practical implementation in traffic management systems. These findings underline the effectiveness of hybrid modeling strategies in traffic flow forecasting and suggest future research directions, such as comprehensive hyperparameter optimization and broader validation across diverse datasets.
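The additive hybrid described in the traffic-forecasting abstract (similar trajectories for the base forecast, XGBoost fitted on its residuals) can be sketched in miniature. In this toy version, the baseline averages the values that followed the k historical windows closest to the latest window, and a constant mean-residual correction stands in for the paper's XGBoost residual model; all names and parameter choices are illustrative assumptions, not the authors' implementation:

```python
def similar_trajectory_forecast(series, window, k):
    """Baseline: average the values that followed the k historical
    windows most similar (Euclidean distance) to the latest window."""
    latest = series[-window:]
    candidates = []
    for start in range(len(series) - window):  # windows with a known follower
        w = series[start:start + window]
        d = sum((a - b) ** 2 for a, b in zip(w, latest)) ** 0.5
        candidates.append((d, series[start + window]))
    candidates.sort(key=lambda t: t[0])
    return sum(v for _, v in candidates[:k]) / k

def hybrid_forecast(series, window=3, k=2):
    """Additive hybrid: baseline + correction fitted on the baseline's
    past one-step-ahead errors (stand-in for an XGBoost residual model)."""
    residuals = []
    for t in range(2 * window, len(series)):
        base = similar_trajectory_forecast(series[:t], window, k)
        residuals.append(series[t] - base)
    correction = sum(residuals) / len(residuals) if residuals else 0.0
    return similar_trajectory_forecast(series, window, k) + correction
```

On a trending series the baseline systematically lags behind, and the residual term absorbs exactly that bias, which is the core intuition behind fitting a second model to the first model's errors.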
© 2025 IEEE.

Book Part (Citation - Scopus: 1)
Consumer Neuroscience Perspective for Brands: How Do Brands Influence Our Brains? (IGI Global, 2020)
Çakar, Tuna; Girişken, Yener
Neuroscientific tools have increasingly been used by marketing practitioners and researchers to understand and explain several different questions that have been raised for a specific company or for general understanding. In this respect, the neuroscientific approach has been evaluated as a potential tool for understanding the neural mechanisms directly related to marketing, with its contribution of providing novel perspectives. The chapter addresses one of the most relevant subjects, brands, to examine the strategic role of applied neuroscience in marketing and consumer behavior. The first section of this chapter focuses on a novel definition of brand, and the next covers brand image, brand perception, and brand loyalty. The second section summarizes the main findings regarding the neuroscience of brands. In the final section, findings from a related experiment are provided to illustrate the potential roles of neuromarketing in developing marketing strategies for brands.

Conference Object
Corner Detection by Local Zernike Moments (2015)
Ozbulak, Gokhan; Gökmen, Muhittin
In this paper, our corner-based interest point detector, Robust Local Zernike Moment-based Features (R-LZMF), which was proved to be scale-, rotation-, and translation-invariant, is investigated for invariance against affine transformation, lighting, and blurring. Furthermore, R-LZMF's corner detection capability with Zernike moments of order 4 is theoretically explained in detail.
Experimental results on the INRIA dataset show that R-LZMF outperforms SIFT, CenSurE, ORB, and BRISK, and competes with SURF, in terms of repeatability for images under affine transformation and photometric deformations such as lighting and blurring.

Conference Object
Curvature Effect on Aesthetic Perception (Cognitive Science Society, 2022)
Demircioğlu, Tuna Esin; Çakar, Tuna; Girişken, Yener
Aesthetic perception is an inseparable part of the decision-making process in daily life. It is also an important part of beauty and therefore of taste. The determination of preferences is directly related to the subregions of the PFC. The contour is the essential visual attribute for accurately perceiving the form of an object. It has been known that sharp angles cause an implicit perception of threat, and that perceived security is related to aesthetic pleasure. The aim of the study is to investigate the effect of contour type on decision making and aesthetic perception in the PFC. The study, using the fNIRS method, has shown that there is a marginally significant relation between liking, contour type, and PFC areas (F(3, 81) = 2.225, p = .092, η2 = .076). Current findings suggest that the left mPFC, FPC, and right dlPFC make a significant contribution to the liking of curved objects.

Patent
Data Deduplication With Adaptive Erasure Code Redundancy (US20160013815A1) (2016)
Arslan, Şuayb Şefik; Wideman, Roderick; Lee, Jaewook; Göker, Turguy
Example apparatus and methods combine erasure coding with data deduplication to simultaneously reduce the overall redundancy in data while increasing the redundancy of unique data. In one embodiment, an efficient representation of a data set is produced by deduplication. The efficient representation reduces duplicate data in the data set. Redundancy is then added back into the data set using erasure coding. The redundancy that is added back in adds protection to the unique data associated with the efficient representation.
How much redundancy is added back in and what type of redundancy is added back in may be controlled based on an attribute (e.g., value, reference count, symbol size, number of symbols) of the unique data. Decisions concerning how much and what type of redundancy to add back in may be adapted over time based, for example, on observations of the efficiency of the overall system.

Conference Object (Citation - WoS: 3; Citation - Scopus: 3)
Data Repair in BS-Assisted Distributed Data Caching (IEEE, 2020)
Kaya, Erdi; Haytaoğlu, Elif; Arslan, Şuayb Şefik
In this paper, centralized and independent repair approaches based on device-to-device communication for the repair of lost nodes are investigated in a cellular network where distributed caching is applied and fault tolerance is provided by erasure codes. Caching mechanisms based on Reed-Solomon codes and minimum-bandwidth regenerating codes are adopted. The proposed approaches are analyzed in a simulation environment in terms of base station utilization load during the repair process. Based on the intuitive assumption that base station communication is usually more costly than device-to-device communication, the centralized repair approach demonstrates better performance than the independent repair approaches in terms of the number of symbols retrieved from the base station. On the other hand, the centralized approach does not achieve a dramatic reduction in the number of symbols downloaded from the other devices.
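A back-of-the-envelope symbol count illustrates why centralized repair tends to lower base-station load in settings like the caching paper above. The toy model below is my own simplification, not the paper's simulator: each in-cell device holds one symbol of an (n, k) Reed-Solomon-coded object, any k symbols suffice to rebuild it, and a centralized coordinator regenerates every lost symbol from a single k-symbol download, whereas independent repair repeats the download per failed node. Note this toy also overstates the device-to-device savings, which the paper's simulations found to be modest:

```python
def repair_cost(k, survivors, failures, centralized):
    """Count symbols downloaded to repair `failures` lost nodes.

    k          -- symbols needed to rebuild an (n, k) RS-coded object
    survivors  -- in-cell devices reachable over device-to-device links,
                  each holding one distinct coded symbol
    Returns (bs_symbols, d2d_symbols): traffic from the base station
    and from other devices, respectively.
    """
    local = min(k, survivors)   # symbols fetched device-to-device
    from_bs = k - local         # shortfall served by the base station
    if centralized:
        # coordinator gathers k symbols once, then regenerates all
        # lost symbols from the decoded object
        return from_bs, local
    # independent: every failed node repeats the same k-symbol download
    return failures * from_bs, failures * local
```

For example, with k = 10, 7 reachable survivors, and 3 failures, centralized repair pulls 3 symbols from the base station versus 9 for independent repair, matching the qualitative conclusion of the abstract.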

