GCRIS
Browsing by Author "Gunes, Peri"

Now showing 1 - 4 of 4
    Conference Object
    Attention-Enhanced Dual-Head LSTM With Rich Feature Engineering for Risk-Adjusted Stock Return Forecasting
    (Institute of Electrical and Electronics Engineers Inc., 2025) Patel, Jay; Gunes, Peri; Ertugrul, Seyit; Sayar, Alperen; Benli, Harun; Makaroglu, Didem; Cakar, Tuna
    Stock return forecasting is a challenging task due to the complex, nonlinear, and volatile nature of financial markets. In this paper, we propose a comprehensive deep learning framework that integrates a two-layer Long Short-Term Memory (LSTM) network augmented with a learnable attention mechanism; a dual-head output for simultaneous regression of next-day returns and classification of price direction; and an extensive suite of technical and macro-financial features. Our feature set comprises lagged log-returns, trend indicators (simple and exponential moving averages), momentum oscillators (RSI, MACD), volatility measures (rolling variance and GARCH conditional volatility), price bands (Bollinger Bands, Donchian channels), volume metrics (On-Balance Volume, Volume Rate of Change), Hidden Markov Model regime states, market index returns, and calendar effects. We train and validate the model using a rolling-window cross-validation scheme with early stopping and hyperparameter tuning to ensure temporal robustness. Empirical results on a large multi-stock dataset demonstrate that our attention-enhanced, dual-task LSTM outperforms single-task LSTMs and traditional machine learning benchmarks, achieving lower forecasting error and more stable generalization. © 2025 IEEE.
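The feature-engineering suite named in the abstract uses standard indicators. A minimal NumPy sketch of three of them — lagged log-returns, a simple moving average, and a Wilder-smoothed RSI — under the usual textbook definitions (function names and the 14-period default are assumptions; the paper's actual implementation is not reproduced here):

```python
import numpy as np

def log_returns(prices):
    """Log-returns r_t = ln(p_t / p_{t-1}), the lagged inputs named in the abstract."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

def sma(x, window):
    """Simple moving average, a basic trend indicator."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(window) / window, mode="valid")

def rsi(prices, period=14):
    """Relative Strength Index with Wilder smoothing (momentum oscillator)."""
    r = np.diff(np.asarray(prices, dtype=float))
    gains = np.where(r > 0, r, 0.0)
    losses = np.where(r < 0, -r, 0.0)
    avg_gain = gains[:period].mean()
    avg_loss = losses[:period].mean()
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:           # no down moves: RSI saturates at 100
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```

Series of such features, stacked per day, would form the input windows fed to the LSTM.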
    Conference Object
    Financial Inputs Prediction with Machine Learning and Covariance Matrix Applications
    (Institute of Electrical and Electronics Engineers Inc., 2025) Benli, Harun; Gunes, Peri; Ulkgun, Mert; Cakar, Tuna
    Reliable estimation of the time-varying covariance matrix of asset returns is indispensable for portfolio construction, risk budgeting, and automated advisory services. Conventional estimators (rolling-window sample covariances, EWMA filters, and GARCH families) remain anchored to historical prices and therefore adapt slowly when market conditions pivot. To overcome this latency, we propose an end-to-end, machine-learning-driven framework that forecasts future covariances directly from high-frequency equity data, largely decoupling risk estimation from past observations. Our pipeline ingests heterogeneous stock feeds through a real-time API, applies error-minimising imputation (forward/backward fill, spline, VAR, wavelet, and co-kriging), and standardises returns via empirically selected scaling schemes. The processed features are then fed to a model zoo comprising linear and penalised regressions, tree ensembles (Random Forest, XGBoost, LightGBM, CatBoost), and kernel-based Support Vector Regression. Weekly walk-forward evaluation on a universe of Turkish insurance equities shows that LightGBM and SVR cut out-of-sample covariance prediction error by up to 35% versus classical benchmarks. We embed the predicted matrices into five allocation engines (Markowitz mean-variance, Black-Litterman, minimum-variance, Risk Parity, and CVaR optimisation), demonstrating that Risk Parity delivers the most consistent variance reduction across 15 stock pairs, while Black-Litterman excels for idiosyncratic combinations such as ANSGR-AKGRT. A granular analysis reveals that day-to-day sign changes in returns create structural breaks that generic regressors miss; augmenting the architecture with a volatility-state classifier and regime-specific learners markedly sharpens turning-point detection. Beyond statistical gains, the framework is production-ready: it is fully implemented in Python, runs on cloud notebooks, and plugs into robo-advisory dashboards. The study thus bridges academic advances in covariance prediction with operational portfolio management, paving the way for broader cross-sector deployment and future research on deep sequential models, transaction-cost awareness, and multi-asset scalability. © 2025 Elsevier B.V., All rights reserved.
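For context on the classical baselines the abstract contrasts against, here is a minimal sketch of a RiskMetrics-style EWMA covariance update and of closed-form minimum-variance weights, one of the five allocation engines named (the lam = 0.94 daily decay and the function names are conventional assumptions, not the authors' code):

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """EWMA covariance baseline: S_t = lam * S_{t-1} + (1 - lam) * r_t r_t^T.
    Anchored to historical prices, hence the latency the paper targets."""
    r = np.asarray(returns, dtype=float)
    S = np.outer(r[0], r[0])                 # seed with the first observation
    for x in r[1:]:
        S = lam * S + (1.0 - lam) * np.outer(x, x)
    return S

def min_variance_weights(S):
    """Closed-form minimum-variance portfolio: w = S^{-1} 1 / (1^T S^{-1} 1)."""
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()
```

A predicted covariance matrix from the ML pipeline would simply replace the EWMA estimate as the input `S` to the allocation step.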
    Conference Object
    Graph Theory-Based Fraud Detection in Banking Check Transactions
    (Institute of Electrical and Electronics Engineers Inc., 2025) Behsi, Zeynep; Memis, Emir Cetin; Ertugrul, Seyit; Sayar, Alperen; Gunes, Peri; Seydioglu, Sarper; Cakar, Tuna
    Traditional banking fraud detection systems rely on rule-based approaches that analyze individual transactions in isolation, failing to capture complex relationship patterns indicative of coordinated fraud schemes such as check-kiting and artificial credit score manipulation. We present a novel similarity-based graph theory approach that constructs weighted networks between check issuers using the Jaccard Similarity Index and employs advanced graph analysis to identify suspicious entity clusters without requiring complete transaction relationship data. Our approach combines the Jaccard Similarity Index for behavioral pattern analysis (addressing payee information unavailability) with comprehensive graph analysis including centrality measures, community detection, and anomaly identification. Through comprehensive evaluation on real banking data containing 458,399 transactions from 121,647 unique issuers (the largest confirmed dataset in the fraud detection literature), we demonstrate the effectiveness of our methodology. Following parameter optimization using grid search methodology (similarity threshold: 0.55, risk percentile: 0.75), our study achieves competitive detection rates in optimal configurations with an average F1-score of 0.447 (±0.164) and peak performance reaching an F1-score of 0.557, while providing superior network topology analysis with a 0.923 clustering coefficient. The system operates under significant data privacy constraints, lacking personal identification information (names, account numbers, IDs) and complete payee data. Despite these limitations, our approach outperforms traditional approaches by leveraging similarity-based indirect relationships, and we project that performance could reach 85-95% levels with complete data access. © 2025 IEEE.
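The core construction — Jaccard similarity between issuers' behavioral feature sets, keeping edges above the grid-searched 0.55 threshold — can be sketched in a few lines (the `issuers` mapping and function names are hypothetical; the paper's behavioral features are richer than raw sets):

```python
def jaccard(a, b):
    """Jaccard Similarity Index: |A ∩ B| / |A ∪ B| between two feature sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def build_similarity_graph(issuers, threshold=0.55):
    """Weighted edge between every pair of issuers whose behavioral feature
    sets exceed the similarity threshold (0.55 is the grid-searched value
    reported in the abstract).  issuers: dict mapping issuer id -> feature set."""
    ids = sorted(issuers)
    edges = {}
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            w = jaccard(issuers[u], issuers[v])
            if w >= threshold:
                edges[(u, v)] = w
    return edges
```

Centrality measures, community detection, and clustering coefficients would then be computed on the resulting weighted graph.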
    Conference Object
    Multi-Output vs. Single-Output Deep Learning for Plant Disease Detection
    (Institute of Electrical and Electronics Engineers Inc., 2025) Taha Kara, Hasan Bedri; Sayar, Alperen; Gunes, Peri; Guvencli, Mert; Ertugrul, Seyit; Cakar, Tuna
    AI-based image processing plays a crucial role in agriculture by enabling early detection of plant diseases, thereby increasing crop productivity and minimizing economic losses. In this study, we present a comparative analysis between a multi-output deep learning model, which simultaneously classifies plant species and assesses their health status, and two separate single-output models trained for each task individually. The publicly available PlantVillage dataset was used for training and evaluation. Performance metrics such as classification accuracy, F1 score, training time, and confusion matrices were used to assess each model. Our results indicate that the multi-output architecture achieves remarkably high classification performance (Plant: 99.98%, Health: 99.78%) while significantly reducing training time by nearly 50% compared to the combined cost of training two individual models. This demonstrates that a unified model not only provides computational efficiency but also maintains predictive strength, making it a practical alternative for real-time agricultural decision support systems. The findings suggest that integrated modeling can contribute to the development of scalable, resource-efficient solutions in precision agriculture. © 2025 IEEE.
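The multi-output idea — one shared trunk feeding a species head and a health head — can be illustrated with a toy forward pass (the paper trains a CNN on PlantVillage images; the dense layers, sizes, and class names below are illustrative only):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiOutputNet:
    """Toy shared-trunk network with two classification heads.  Because the
    trunk is computed once per input, the cost of the second head is only
    its own output layer — the source of the training-time savings."""

    def __init__(self, d_in, d_hidden, n_species, n_health, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (d_in, d_hidden))        # shared trunk
        self.Wp = rng.normal(0.0, 0.1, (d_hidden, n_species))  # species head
        self.Wh = rng.normal(0.0, 0.1, (d_hidden, n_health))   # health head

    def forward(self, x):
        h = np.tanh(x @ self.W)          # one shared representation, two heads
        return softmax(h @ self.Wp), softmax(h @ self.Wh)
```

Two single-output models would each recompute their own trunk, roughly doubling the training cost for the same pair of predictions.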