Accepted Papers
Binary Logarithm and Antilogarithm Calculations Using Uniformly Segmented Linear Regression

Dat Ngo, Korea National University of Transportation, South Korea

ABSTRACT

Digital signal processing (DSP) applications fundamentally rely on arithmetic operations such as addition, multiplication, and division. These operations present significant challenges when implemented on resource-constrained edge devices due to their complexity and high computational demands. The logarithmic number system offers a promising solution to mitigate these challenges by reducing delay and hardware area, although often at the cost of precision. This paper introduces an efficient method for computing binary logarithms and antilogarithms using uniformly segmented linear regression. In the proposed approach, the input signal is decomposed into integer and fractional components. Linear regression models are developed for computing logarithms and antilogarithms within equally spaced intervals of the fractional part, with the appropriate model selected based on a predefined number of the most significant bits of the fractional component. Comparative evaluations against benchmark techniques demonstrate the proposed method’s superiority in terms of computational accuracy, making it well-suited for DSP tasks in edge computing environments.
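The segmented-linear scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the segment count (8), the sample density, and the use of NumPy least-squares fitting are assumptions.

```python
import math
import numpy as np

SEG_BITS = 3               # MSBs of the fractional part used to select a model
SEGMENTS = 1 << SEG_BITS   # 8 uniform segments covering f in [0, 1)

# Fit one least-squares line y = a*f + b to log2(1 + f) on each segment.
coeffs = []
for s in range(SEGMENTS):
    f = np.linspace(s / SEGMENTS, (s + 1) / SEGMENTS, 64, endpoint=False)
    a, b = np.polyfit(f, np.log2(1.0 + f), 1)
    coeffs.append((a, b))

def log2_approx(x: float) -> float:
    """Decompose x = (1 + f) * 2**k, then approximate log2(1 + f)
    with the linear model of the segment indexed by the MSBs of f."""
    m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
    k, f = e - 1, 2.0 * m - 1.0   # integer part k, fractional part f in [0, 1)
    a, b = coeffs[int(f * SEGMENTS)]
    return k + a * f + b
```

With eight segments the absolute error of the fractional approximation stays on the order of 10^-3; the antilogarithm would follow the same pattern, with per-segment lines fitted to 2^f instead of log2(1 + f).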

Keywords

Digital signal processing, logarithmic number system, uniform segmentation, linear regression.


Enhancing Educational QA Systems: Integrating Knowledge Graphs and Large Language Models for Context-Aware Learning

Ayse Arslan, USA

ABSTRACT

This study explores the integration of Knowledge Graphs (KG) and Large Language Models (LLMs) to design a question-answering (QA) system in the field of education. The proposed method involves constructing a KG using LLMs, retrieving contextual prompts from high-quality learning resources, and enhancing these prompts to generate accurate answers to real questions related to major educational concepts. The technical framework outlined in this paper, along with the analysis of results, contributes to the advancement of LLM applications in educational technology. The findings provide a foundation for developing intelligent, context-aware educational systems that leverage structured knowledge to support learning and enhance educational outcomes.

Keywords

Knowledge Graphs, LLMs, Large Language Models


An AI-Powered Mobile Application for Reducing Food Waste Through Cost Estimation and User Awareness

John Q Liu1, Tyler Boulom2, 1Deerfield Academy, 2Computer Science Department, California State Polytechnic University, USA

ABSTRACT

This paper addresses the critical issue of food waste, which contributes to economic losses and environmental harm [1]. We propose a mobile application, Foodnomics, leveraging AI technology to estimate food waste costs and raise awareness among users [2]. The app utilizes image recognition to identify leftover food, estimate portion sizes, and calculate associated costs. Our experiments revealed strong accuracy in classifying common food items but highlighted challenges with less familiar items and regional price discrepancies. The proposed solution builds on existing methodologies by focusing on individual consumer behavior and providing actionable insights. By empowering users to track and reduce their food waste, our project offers a scalable and impactful tool for promoting sustainability and reducing food insecurity [3]. The study concludes with recommendations for further improvements, including expanding the training dataset and incorporating real-time pricing models.

Keywords

Food Waste Reduction, AI-Powered Cost Estimation, Image Recognition Technology, Sustainability, Consumer Behavior

An Immersion Campus Simulation Protection and Safety Improvement System using Artificial Intelligence and Machine Learning

Ella Xing and Laurie Delinois, California State Polytechnic University, USA

ABSTRACT

This project addresses the critical need for immersive crisis response training to enhance individuals' ability to act effectively under high-stress conditions, such as fires and active shooter events [1]. Traditional training methods often lack the depth and adaptability required for effective skill retention. To bridge this gap, we developed a virtual reality (VR) program that simulates realistic crisis environments using Unity’s physics, collider interactions, and coroutines for dynamic fire propagation and evasive maneuvers [2]. Key components include scenario selection, interactive obstacle navigation, and outcome evaluation, all designed to offer users an engaging training experience. Initial challenges, such as optimizing control responsiveness, were resolved by incorporating an FPS controller, ensuring accurate and responsive movement [3]. Experimental trials across scenarios demonstrated notable improvements in user response accuracy and engagement, with the FPS controller outperforming standard simulators. This VR training tool offers substantial benefits to schools, workplaces, and communities by providing accessible, impactful training that strengthens emergency preparedness.

Keywords

Crisis Response Training, Virtual Reality Simulation, Emergency Preparedness, Immersive Learning Technology.

Interactive Web: Leveraging AI-driven Code Generation to Simplify Algorithm Development for Novice Programmers in Web-based Applications

Kai Zhang1, Vyom Kumar2, Max Zhang3, 1Moreau Catholic High School, 2Moreau Catholic High School, 3Irvington High School, USA

ABSTRACT

In the history of computer science, the development of algorithms has been key to composing the many programs used every day by the public, such as programs for texting, entertainment, research, and anything on the internet [9]. However, all of these programs are expensive and time-intensive to create. Even if completing a single project is not particularly difficult, the knowledge required to create that project might take years to learn. Since the advent of AI, the process of creating algorithms has been shortened greatly. Anybody with internet access can make software using AI, but a novice programmer would still struggle to create new code from pseudocode generated by AI [10]. By utilizing the OpenAI API, we created InteractiveWeb, a program that takes input from a user, sends it to ChatGPT, and, after creating an initial model of that program, develops a new and improved version of the previous code. With this program, the AI could accurately develop the instructed projects, allowing it to create fully working games of Minesweeper, Connect 4, and even Chess.

Keywords

AI, OpenAI, ChatGPT, Code Generation, Software Engineering using AI, AI-created Software


Rule by Rule: Learning with Confidence Through Vocabulary Expansion

Albert Nossig, Tobias Hell, and Georg Moser, Department of Computer Science, University of Innsbruck, Data Lab Hell GmbH, Europastraße 2a, 6170 Zirl, Tyrol, Austria

ABSTRACT

In this paper, we present an innovative iterative approach to rule learning specifically designed for (but not limited to) text-based data. Our method focuses on progressively expanding the vocabulary utilized in each iteration, resulting in a significant reduction of memory consumption. Moreover, we introduce a Value of Confidence as an indicator of the reliability of the generated rules. By leveraging the Value of Confidence, our approach ensures that only the most robust and trustworthy rules are retained, thereby improving the overall quality of the rule learning process. We demonstrate the effectiveness of our method through extensive experiments on various textual as well as non-textual datasets, including a use case of significant interest to insurance industries, showcasing its potential for real-world applications.

Keywords

Rule Learning, Explainable Artificial Intelligence, Text Categorization, Reliability of Rules.


Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation

Maksim Kuprashevich, Grigorii Alekseenko, and Irina Tolstykh, R&D Department, Salutedev, Uzbekistan

ABSTRACT

Multimodal Large Language Models (MLLMs) have recently gained immense popularity. Powerful commercial models like ChatGPT and Gemini, as well as open-source ones such as LLaVA, are essentially general-purpose models and are applied to solve a wide variety of tasks, including those in computer vision. These neural networks possess such strong general knowledge and reasoning abilities that they have proven capable of working even on tasks for which they were not specifically trained. We compared the capabilities of the most powerful MLLMs to date (ShareGPT4V, ChatGPT-4V/4O, and LLaVA-Next) against a state-of-the-art specialized model on the task of age and gender estimation. This comparison has yielded some interesting results and insights about the strengths and weaknesses of the participating models. Furthermore, we attempted various ways to fine-tune the ShareGPT4V model for this specific task, aiming to achieve state-of-the-art results in this particular challenge. Although such a model would not be practical in production, as it is incredibly expensive compared to a specialized model, it could be very useful in some tasks, like data annotation.

Keywords

MLLM, VLM, Human Attribute Recognition, Age estimation, Gender estimation, Large Models Generalization


A Smart Chrome Extension for Web Content Analysis and Distraction Mitigation using Artificial Intelligence and Machine Learning

Zhiyuan Zhu and Garret Washburn, California State Polytechnic University, USA

ABSTRACT

The temptation to lose oneself in the digital world is more prevalent in the present state of the internet than ever before. Many internet users fall into this temptation, voiding their efforts to get productive work done in their internet browser. To solve this problem, this paper proposes the ScreenTimeSage Chrome Extension, available on the Chrome Web Store [12]. ScreenTimeSage seeks to remedy the issue of internet users getting off task by utilizing AI to analyze their specific browsing behavior and determine where and how they tend to get off task. Once the extension has analyzed how the user gets off task, it is able to provide customized support and advice on how to help the user stay focused. The extension relies on the Gemini LLM API, utilizing prompt engineering and Chrome storage to keep an active record of the user’s browsing habits and to classify how productive the user is on each web page [13]. The extension has been tested for its content classification as well as its back-end server reliability, as reported in this paper; both experiments demonstrated good results. The ScreenTimeSage Chrome Extension is ultimately a great option for those seeking to combat their inclination to get distracted while using their browser, as it provides customized insights into how they individually get distracted and how they can combat this for themselves.

Keywords

Productivity, AI-driven browsing analysis, Distraction, Chrome extension.


A Smart Cardiovascular Risk Assessment and Rehabilitation Treatment Suggestion System using Artificial Intelligence and Data Science

Gengshuo Wang and Morris Blaustein, California State Polytechnic University, USA

ABSTRACT

Cardiovascular diseases (CVD) are the leading cause of death worldwide, underscoring the urgent need for accessible and personalized health management solutions [1]. This research presents CRIC, a mobile app that leverages AI to generate personalized Cardiac Risk Factor Scores and provide tailored recommendations. By integrating user input, AI-driven analysis, and a secure database, CRIC delivers actionable health insights and reliable educational resources to users [2]. Experiments involving 10 participants demonstrated high user satisfaction, with significant knowledge improvement as post-test scores increased by 30%. While challenges such as data accuracy and navigation were identified, iterative enhancements addressed these issues effectively. CRIC offers an innovative approach to bridging the gaps in traditional cardiovascular risk assessment, empowering users to make informed decisions about their health and contributing to global efforts in preventive healthcare.

Keywords

AI-Powered Analysis, Cardiovascular Health, Mobile Health App, Health Education, Preventive Healthcare


The Evolution of Human Thought

Martin Nimbach, Hochschule Bonn-Rhein-Sieg, Bonn, Germany

ABSTRACT

This document explores the evolution of human thought as a result of biological, social, and technological advancements. It traces the development of cognitive abilities from early humans’ instinct-driven actions to the emergence of abstract reasoning, complex planning, and self-reflection. A critical milestone in this evolution was the development of language, enabling not only interpersonal communication but also internal dialogue, which laid the foundation for conscious thought, social organization, and cultural development. The prefrontal cortex is identified as a key brain region for deliberate decision-making, operating serially to ensure precision despite limitations in processing capacity. This structure contrasts with the "outer brain," which processes vast amounts of sensory data in parallel to support immediate, survival-driven responses. The distinction between these systems aligns with the concepts of "System 1" and "System 2" thinking, as introduced by Kahneman and Tversky, highlighting the balance between intuitive and analytical thought processes. The invention of writing, described as "System 3" thinking, transformed human cognition by externalizing memory and facilitating the preservation and dissemination of knowledge. This advancement enabled cultural and scientific progress, culminating in the invention of the printing press, which democratized access to information. The internet represents a further leap in cognitive evolution, creating a collaborative and interactive global knowledge system. Large Language Models (LLMs) are positioned as a form of "System 4" thinking, offering conversational access to the world’s collective knowledge. While these AI tools expand human cognitive capabilities, the document emphasizes the need for critical evaluation of their outputs to avoid the pitfalls of misinformation and over-reliance on automated systems. 
By examining the progression of human thought from instinctual behaviour to augmented cognition with AI, the document underscores how evolving tools and technologies have continually shaped the boundaries of human intelligence and societal development.

Keywords

Speed of Thought, Brain Data Processing, Language, Writing, AI, Large Language Models


Towards Polyglot Data Processing in Social Networks using the Hadoop-Spark ecosystem

Antony Seabra de Medeiros, Sergio Lifschitz, Departamento de Informatica - PUC-Rio, Brazil

ABSTRACT

This article explores the use of the Hadoop-Spark ecosystem for social media data processing, adopting a polyglot approach with the integration of various computation and storage technologies, such as Hive, HBase and GraphX. We discuss specific tasks involved in processing social network data, such as calculating user influence, counting the most frequent terms in messages and identifying social relationships among users and groups. We conducted a series of empirical performance assessments, focusing on executing selected tasks and measuring their execution time within the Hadoop-Spark cluster. These insights offer a detailed quantitative analysis of the performance efficiency of the ecosystem tools. We conclude by highlighting the potential of the Hadoop-Spark ecosystem tools for advancing research in social networks and related fields.

Keywords

Social Networks, Hadoop, Spark ecosystem, Data Processing

A Smart Caffeine Level Predicting and Analysis Solution With Sequential Machine Learning Model Using Artificial Intelligence and Computer Vision

Tianrui Zhang1, Ang Li2, 1School of Professional Studies, Northwestern University, 633 Clark St, Evanston, IL 60208, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840

ABSTRACT

Caffeine is the most widely consumed stimulant globally, yet its overconsumption poses significant health risks. Traditional methods for measuring caffeine content, such as weighing coffee, can be impractical in everyday settings. This paper proposes an innovative solution that leverages artificial intelligence (AI) and machine learning, specifically utilizing a Sequential Convolutional Neural Network (CNN), to predict caffeine levels based on image analysis of coffee. The system processes images to determine brightness, correlating this data with caffeine concentration on a defined scale. Challenges such as dataset selection, prediction accuracy variability, and training epoch limitations were addressed through data cleaning and iterative model training. Experiments revealed that the model achieves a high accuracy rate, indicating its potential as a practical tool for consumers aiming to monitor their caffeine intake. This application not only enhances user convenience but also promotes healthier consumption practices by providing a reliable method for estimating caffeine levels visually.
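The brightness-to-caffeine idea can be illustrated with a minimal sketch. The paper's model is a Sequential CNN; here a simple mean-brightness feature with an assumed linear mapping stands in for it, and the 0-200 mg scale, the function name, and the synthetic images are all hypothetical.

```python
import numpy as np

def brightness_to_caffeine(image: np.ndarray, scale_mg: float = 200.0) -> float:
    """Map mean image brightness to an assumed linear caffeine scale:
    darker coffee -> higher estimated caffeine (illustrative mapping only)."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image  # collapse RGB to grayscale
    darkness = 1.0 - gray.mean() / 255.0                      # 0 = pure white, 1 = pure black
    return darkness * scale_mg                                # mg on an assumed 0-200 scale

# Synthetic stand-ins for photos of dark and light coffee.
dark_coffee = np.full((64, 64, 3), 40, dtype=np.uint8)
light_coffee = np.full((64, 64, 3), 200, dtype=np.uint8)
```

A trained CNN would replace the linear mapping with a learned regression from the image to the caffeine scale; the sketch only shows why brightness is a usable signal.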

Keywords

Caffeine Level, Artificial Intelligence, Biology


Electronic Games in Education: Use of Media as a Pedagogical Resource

Renan Colzani da Rocha, Alexia Silva da Silveira Araujo, Daniel Lima Lins de Vasconcellos, Letícia Debastiani França, Flavio Andaló and Milton Luiz Horn Vieira, Department of Management, Media and Technology, Federal University of Santa Catarina, Florianópolis, Brazil

ABSTRACT

This article seeks to analyze the theoretical bases that support the construction of a fun educational game, elaborating a structure for the creation of games through theoretical concepts studied from the perspective of psychology and its relationship with this type of media. In view of the growth in the consumption of electronic games as entertainment among children and young people in the twenty-first century, we conceived an educational electronic game containing three elements considered central to what makes games fun: challenge, fantasy or narrative, and curiosity. Using as an example the historical narrative developed as an educational tool in the games of the Assassin's Creed series, a project was created to develop an educational game set in Florianópolis in the 1950s, starring the artist and folklorist Franklin Cascaes and based on the short stories published in the book The Fantastic on the Island of Santa Catarina. This article presents the results of the historical research carried out for the development of the game's first map, as well as a summary of its narrative and mechanics.

Keywords

Education, Flow, Franklin Cascaes, Electronic Games, Narrative.


A Comprehensive Web Application for Art Showcasing Utilizing Generative AI for Art Therapy Applications

Athena Bi1, Carlos Gonzalez2, 1Los Alamitos High School, 3591 W Cerritos Ave, Los Alamitos, CA 90720, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Art therapy is a proven method for fostering self-expression and supporting mental health, yet traditional approaches often require resource-intensive, manual analysis by therapists, limiting accessibility [1]. Our project addresses this gap by creating an AI-powered art therapy web application that enables users to upload artwork and receive personalized, therapeutic insights. By combining Firebase for user management, Python Flask for backend processing, and OpenAI for generating art interpretations, our platform bridges the expressive power of art with the analytical capabilities of AI [2]. Key challenges included integrating diverse technologies seamlessly and designing prompts that guide AI to interpret abstract and symbolic art effectively. These challenges were addressed through standardized API protocols and iterative prompt refinement. Experiments tested the AI's accuracy and consistency, showing strong alignment with expert interpretations for symbolic art and reliable outputs across multiple analyses. Our application democratizes access to art therapy, offering scalable, non-verbal pathways to self-discovery and mental wellness for a broader audience.

Keywords

Comprehensive, Web App, Generative AI, Art Therapy Application.


Revolutionizing Supply Chain Finance With Blockchain Technology: a Comprehensive Literature Review

Zengkui Zhao1, 2, Soodamani Ramalingam1, Alexios Mylonas1, 1School of Physics, Engineering and Computer Science, University of Hertfordshire, London, UK, 2Department of Computer Science, Shanghai Jian Qiao University, Shanghai, China

ABSTRACT

This literature review explores the transformative potential of blockchain in Supply Chain Finance (SCF), focusing on enhancing Small and Medium-sized Enterprises (SMEs) creditworthiness in the supply chain. Blockchain provides three key benefits—transparency, efficiency, and security—which comprehensively address major SCF challenges such as information asymmetry, high transaction costs, and fraud. Despite these advantages, SMEs face significant barriers to adoption, including technological limitations, organizational resistance, and regulatory uncertainties. Through a systematic review of 52 academic articles, this study highlights blockchain’s ability to improve SMEs’ financial inclusion by streamlining credit assessments and fostering trust among stakeholders. Future research should focus on scalability, interoperability, and cross-border applications to fully realize blockchain’s potential in reshaping SCF.

Keywords

Blockchain, Supply Chain Finance (SCF), Small and Medium-sized Enterprises (SMEs), Financial Inclusion, Trust.


Intelligent Honeypot for Web Applications: Leveraging Seq2seq and Reinforcement Learning for Adaptive Attacker Interaction

Ananya Varadarajan, Ashwin Chandrasekaran, Rachana Binumohan, Rahul Huliyar Ravishankar, Dr. Gokul Kannan Sadasivam, Department of Computer Science Engineering, PES University, 560100, Bengaluru, India

ABSTRACT

We present an intelligent honeypot system designed to mimic legitimate websites using Sequence-to-Sequence (Seq2Seq) learning and Deep Q-Learning. The system generates realistic, contextually appropriate responses to attacker queries, prolonging interactions and providing insights into malicious behaviors while safeguarding actual systems. The Seq2Seq model, trained on 100,000 HTTP request-response pairs, enables the honeypot to produce responses that closely resemble those of real servers, enhancing its ability to deceive attackers. Deep Q-Learning optimizes engagement by selecting the most effective responses through a custom reward function, balancing realism and interactivity to maximize session length. Performance was evaluated using metrics such as Response Realism Rate (RRR), Semantic Consistency Accuracy (SCA), and Average Session Length (ASL). The honeypot achieved an RRR of 92.3%, an SCA of 89.7%, and a 94.5% Optimal Response Selection Rate (ORSR). These advancements increased ASL by 143.5%, from 3.2 to 7.8 exchanges, reflecting prolonged attacker engagement. By integrating Seq2Seq and Deep Q-Learning, this honeypot demonstrates significant improvements in generating convincing responses and sustaining interactions. These results contribute to modern cybersecurity by providing a practical and theoretical framework for developing next-generation honeypots capable of deceiving attackers and gathering actionable intelligence.

Keywords

Cybersecurity, Deep Q-Learning, Intelligent Honeypots, Seq2Seq, Web Application Security.


A Stock and ETF Value and Trends Prediction Method with Mobile Platform using Machine Learning and Data Mining

Bohong Tian1, David Garcia2, 1CS Department, Irvine Valley College, 5500 Irvine Center Dr, Irvine, CA 92618, 2University of California, Los Angeles, CA 90095

ABSTRACT

In this paper, we present the development of a stock and ETF forecasting mobile platform using machine learning and deep learning algorithms. The platform collects historical stock data using web scraping techniques (Beautiful Soup) from sources such as Yahoo Finance and Google Finance. We applied Linear Regression, SVM, and LSTM models to predict stock trends and evaluated their performance based on accuracy and efficiency. The experiments showed that while Linear Regression offers quick predictions, it lacks the ability to handle complex market behavior. SVM performed better for classifying trend directions but struggled during periods of volatility. LSTM networks, though computationally intensive, provided the highest accuracy by capturing sequential patterns in the data. This multi-model approach ensures the platform offers investors both speed and precision, providing actionable insights. Future improvements will focus on incorporating hyperparameter optimization and sentiment analysis to further enhance predictive performance.

Keywords

Stock, Trends Prediction, Machine Learning.


Deep Reinforcement Learning with Pairwise Modeling for the Weapon-target Assignment Problem

Valentin Colliard, Alain Peres, and Vincent Corruble, Sorbonne Université, CNRS, LIP6, Thales LAS France

ABSTRACT

In this paper, we introduce two deep reinforcement learning approaches designed to tackle the challenges of air defense systems. StarCraft II has been used as a game environment to create attack scenarios where the agent must learn to defend its assets and points of interest against aerial units. By estimating a value for each weapon-target pair, our agent shows robustness across multiple scenarios, truly distinguishing and prioritizing targets in order to protect its units. These two methods, one using multi-layer perceptrons and the other using the attention mechanism, are compared with rule-based algorithms. Through empirical evaluation, we validate their efficacy in achieving resilient defense strategies across diverse and dynamic environments.
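To make the pairwise formulation concrete, the sketch below combines a pairwise value matrix (which the paper's networks would estimate) with a simple greedy selection rule. Both the toy values and the greedy rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def assign_greedy(scores: np.ndarray) -> list[tuple[int, int]]:
    """Repeatedly pick the highest-valued remaining (weapon, target) pair
    from a pairwise value matrix until every weapon is assigned."""
    scores = scores.astype(float).copy()
    pairs = []
    for _ in range(scores.shape[0]):
        w, t = np.unravel_index(np.argmax(scores), scores.shape)
        pairs.append((int(w), int(t)))
        scores[w, :] = -np.inf   # weapon w is used up; targets may be re-engaged
    return pairs

# Toy 3x3 value matrix: rows = weapons, cols = targets.
values = np.array([[0.9, 0.2, 0.1],
                   [0.4, 0.8, 0.3],
                   [0.5, 0.6, 0.7]])
print(assign_greedy(values))  # [(0, 0), (1, 1), (2, 2)]
```

In the paper's setting a learned network replaces the hand-written matrix, producing one value per weapon-target pair from the scenario state.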

Keywords

Deep Reinforcement Learning, Weapon-Target Assignment, Simulation


E-governance in the Agricultural Sector in India: a Decade of Transformation (2014-2024)

Rizwan Shaikh, MES's Neville Wadia Institute of Management Studies and Research, Pune

ABSTRACT

Over the past decade, e-governance has emerged as a critical tool in transforming India's agricultural sector. With over 50% of the Indian population reliant on agriculture, digitization and e-governance initiatives have helped bridge the gap between farmers and governmental support, fostering transparency, efficiency, and inclusion. Descriptive statistics are vital in summarizing and interpreting data related to agriculture web portals. By presenting central tendencies, variability, and frequency distributions, this paper provides insights into user demographics, portal usage patterns, and overall satisfaction levels. The Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) scheme, launched in 2019 by the Government of India, aims to provide direct income support to small and marginal farmers. Under this scheme, eligible farmers receive ₹6,000 annually, paid in three equal instalments. Using data collected from government reports, surveys, and farmer responses across various states, the results show high participation rates but highlight challenges related to payment delays, registration issues, and digital literacy. This paper analyzes the evolution of e-governance in Indian agriculture from 2014 to 2024, evaluating the impact of digital initiatives on farmer welfare, agricultural productivity, and government service delivery. It also focuses on key metrics from several major agricultural web portals in India, including PM-KISAN, e-NAM, and Kisan Suvidha. Using case studies, data analysis, and a review of policies, this research highlights the successes and challenges of e-governance in rural India and provides recommendations for future improvements.


Reward Hacking in Reinforcement Learning and RLHF: a Multidisciplinary Examination of Vulnerabilities, Mitigation Strategies, and Alignment Challenges

Tiechuan Hu1, Wenbo Zhu2, Yuqi Yan3, 1Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA, 2Physical Science Division, University of Chicago, Chicago, IL, USA, 3Olin Business School, Washington University in St. Louis, Saint Louis, MO, USA

ABSTRACT

Reinforcement Learning (RL) agents optimize policies based on provided rewards, yet may exploit unintended loopholes in the reward design, a phenomenon known as reward hacking. With the rise of Reinforcement Learning from Human Feedback (RLHF), this issue has gained new urgency, especially in large language models (LLMs) where subtle misalignments between the reward model and true human intentions can produce outputs that superficially meet criteria while failing to embody authentic correctness or values. Reward hacking is not confined to RL; it also affects domains such as sustainability planning, healthcare analytics, finance, social media sentiment analysis, and computer vision, where imperfect metrics can guide systems astray. This paper provides a comprehensive, multidisciplinary examination of reward hacking vulnerabilities. We elucidate how misaligned metrics can produce superficial success, connect RLHF challenges to analogous issues in other fields, and discuss advanced strategies for mitigating reward hacking. These include formalizing reward hacking definitions, employing interpretable and causal models, embracing multi-objective optimization and human-in-the-loop oversight, and adopting emerging frameworks like decoupled approval mechanisms. Beyond technical solutions, we also consider ethical and societal implications. By integrating insights from multiple sectors, we highlight a path toward more reliably aligned AI systems that genuinely uphold human values.

Keywords

Reward Hacking, Reinforcement Learning, RLHF, Alignment, Metric Design, Interpretability, Causal Inference.


An Intuitive AI-driven System to Empower Emotional Well-being Through Smart Journaling and Personalized Action Plans

Courtney Keyi Lee1, Jonathan Thamrun2, 1Northwood High School, 4515 Portola Pkwy, Irvine, CA 92620, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Mental health challenges, particularly depression and anxiety, have risen sharply in recent years, exacerbated by barriers such as stigma, cultural differences, and lack of accessible treatment. This paper proposes an AI-driven journaling app as a solution to address these issues by offering personalized, culturally sensitive, and engaging mental health support. The app leverages natural language processing (NLP) and adaptive learning to generate tailored prompts and advice, fostering emotional awareness and self-reflection. Key challenges included interpreting complex emotions and addressing cultural and generational nuances. These were tackled through dynamic AI models, curated datasets, and iterative refinement. Experiments tested the app’s accuracy in emotion detection and its ability to adapt advice to diverse contexts, achieving promising results in relevance and cultural alignment. By integrating inclusivity and real-time feedback, this app offers a practical, scalable tool for mental well-being, making it an accessible alternative to traditional mental health support.

Keywords

Natural Language Processing, Flutter, Emotional Analysis, Dart.


Exploiting Rule-based Models for Out of Distribution Detection in Image Classification

Sara Narteni1, Andrea Cappelli2, and Maurizio Mongelli1, 1CNR-IEIIT, Corso F.M. Perrone, 24, 16152, Genoa, Italy, 2University of Genoa, Genoa, Italy

ABSTRACT

In the rapidly growing field of trustworthy Artificial Intelligence (AI), Out of Distribution (OoD) detection is an important challenge: ensuring that a deployed machine learning (ML) model becomes aware of inputs that do not conform to those seen during training. In AI-based vision systems implementing, among other tasks, image classification, this assurance is fundamental to guaranteeing the robustness of the systems themselves. In this paper, we propose a method that leverages a rule-based classification model and Uniform Manifold Approximation and Projection (UMAP) for image dimensionality reduction to perform OoD detection without distributional assumptions or heavy parameter tuning. We test the proposed approach both on benchmark data and in a meaningful real-world case study on recognizing anomalies in robotic navigation through corridors, efficiently detecting the anomalous cases by framing them as an OoD detection problem.

Keywords

Out Of Distribution detection, image classification, robustness, explainable AI.


A Counterfactual-based Approach to Prevent Crowding in Intelligent Subway Systems

Alberto Carlevaro, Marta Lenatti, Alessia Paglialonga, and Maurizio Mongelli, CNR-Istituto di Elettronica e di Ingegneria dell’Informazione e delle Telecomunicazioni, 00185 Rome, Italy

ABSTRACT

Today, the cities we live in are far from being truly smart: overcrowding, pollution, and poor transportation management are still in the headlines. With wide-scale deployment of advanced Artificial Intelligence (AI) solutions, however, it is possible to reverse this course and apply appropriate countermeasures to take a step forward on the road to sustainability. In this research, explainable AI techniques are applied to provide public transportation experts with suggestions on how to control crowding on subway platforms by leveraging interpretable, rule-based models enhanced with counterfactual explanations. The experimental scenario relies on agent-based simulations of the De Ferrari Hitachi subway station of Genoa, Italy. Numerical results for both prediction of crowding and counterfactual (i.e., countermeasure) properties are encouraging. Moreover, the quality of the proposed explainable methodology was assessed by a team of experts in the field to certify and validate the model.

Keywords

Explainable AI, counterfactual explanation, crowding prediction, smart public transportation, data-driven modelling.


Residual-Aware Stacking: A Novel Approach for Improved Machine Learning Model Performance

Hardev Ranglani, EXL Service Inc.

ABSTRACT

Traditional stacking ensembles in machine learning aggregate predictions from multiple models to improve accuracy, but they often fail to address the residual errors left by base models. This paper introduces Residual-Aware Stacking (RAS), a novel approach that trains additional models to predict the residuals (errors) of the base models, creating a second layer of predictions. A meta-model then combines the original base predictions with these residual corrections to produce the final output. We demonstrate the improved accuracy and robustness of this technique by applying it to various regression datasets and comparing its performance against traditional models. This method highlights the potential of residual modeling in enhancing ensemble learning.
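The RAS pipeline (base models, residual learners, a combining meta step) can be sketched on a toy regression problem. Everything below is illustrative rather than the paper's actual setup: the base model is a deliberately weak mean predictor so the residual learner has structure left to capture, the residual learner is a one-variable least-squares line, and the meta step is a plain sum instead of a trained meta-model.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def predict_line(model, xs):
    a, b = model
    return [a * x + b for x in xs]

# Toy data: a quadratic target the weak base model cannot capture.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x * x for x in xs]

# 1) Weak base model: predict the training mean everywhere.
mean_y = sum(ys) / len(ys)
base_pred = [mean_y for _ in xs]

# 2) Residual model: a line fitted to the base model's errors.
residuals = [y - p for y, p in zip(ys, base_pred)]
res_model = fit_line(xs, residuals)
res_pred = predict_line(res_model, xs)

# 3) Meta step: here simply base prediction + residual correction;
#    the paper instead trains a meta-model on both sets of predictions.
final_pred = [p + r for p, r in zip(base_pred, res_pred)]

mae_base = sum(abs(y - p) for y, p in zip(ys, base_pred)) / len(ys)
mae_ras = sum(abs(y - p) for y, p in zip(ys, final_pred)) / len(ys)
```

On this toy data the residual correction roughly cuts the mean absolute error from about 7.56 to about 2.22, which is the core idea: the second layer repairs systematic errors the first layer leaves behind.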

Keywords

Stacking, Ensemble Methods, Residual Models, Meta Learners, Regression Modeling


Addressing Big Data Challenges with Soft Computing Approaches

Anil Kumar Jonnalagadda and Praveen Kumar Myakala, Independent Researchers, Dallas, Texas, USA

ABSTRACT

The exponential growth of data has outpaced traditional computing systems, necessitating innovative approaches for processing, managing, and extracting actionable insights. This paper explores the transformative potential of soft computing techniques—fuzzy logic, neural networks, and evolutionary algorithms—to address critical challenges in Big Data, including uncertainty, imprecision, and noise in real-world datasets. These methods offer unparalleled adaptability and scalability for diverse applications. We propose a comprehensive framework that integrates hybrid approaches, such as neuro-fuzzy systems and evolutionary-fuzzy optimization, to enhance clustering, feature selection, and predictive analytics with improved accuracy and interpretability. Extensive experiments on real-world datasets from domains like healthcare and IoT demonstrate significant advancements in processing speed, resource utilization, and analytical efficiency over traditional methods. This study highlights the pivotal role of soft computing in unlocking the true potential of Big Data, enabling innovative solutions and driving meaningful advancements across industries.
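As a minimal illustration of one surveyed technique, the sketch below shows how fuzzy logic represents an imprecise measurement with overlapping membership functions instead of crisp buckets. The variable, labels, and breakpoints are illustrative assumptions, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Overlapping fuzzy sets for a noisy reading, e.g. server load in percent.
def load_memberships(x):
    return {
        "low": triangular(x, -1, 0, 50),
        "medium": triangular(x, 25, 50, 75),
        "high": triangular(x, 50, 100, 101),
    }

m = load_memberships(60)
# A reading of 60% is partly "medium" and partly "high" rather than
# being forced into a single crisp category, which is how fuzzy systems
# absorb the uncertainty and imprecision the abstract highlights.
```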

Keywords

Big Data, Soft Computing, Fuzzy Logic, Neural Networks, Evolutionary Algorithms, Hybrid Systems, Predictive Analytics, Feature Selection, Clustering.


Integrating Machine Learning for Diamond Price Prediction and Distinguishing Natural Diamonds from Lab-Grown: A Unified Approach

Hardev Ranglani, EXL Service Inc.

ABSTRACT

Accurate pricing and authentication of diamonds are essential for ensuring market transparency and consumer confidence. This study applies machine learning (ML) techniques to predict diamond prices and to classify diamonds as lab-grown or natural based on attributes such as cut, color, clarity, and carat weight. A unified framework combining regression for price prediction and classification for origin determination is developed with robust model evaluation. The proposed models achieve a mean absolute error of $554.32 in price prediction and an F1 score of 98.66% in origin classification. The study also uses local linear regression to explore how the linear relationship between the attributes and price varies across the data. These findings provide valuable insights for the gemstone industry, offering a practical and interpretable approach to automated diamond valuation and authentication.
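The varying linear relationship mentioned above can be illustrated with a minimal locally weighted least-squares sketch. The one-dimensional feature, Gaussian kernel, bandwidth, and toy prices below are all assumptions for illustration, not the paper's actual features or models.

```python
import math

def local_linear_predict(x0, xs, ys, bandwidth=0.3):
    """Fit a weighted line around x0 with a Gaussian kernel, return
    the local prediction and the locally fitted slope."""
    w = [math.exp(-((x - x0) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    slope = sxy / sxx
    return my + slope * (x0 - mx), slope

# Toy nonlinear curve: price grows faster at higher carat weights.
xs = [0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]          # carat weight
ys = [x ** 2 * 1000 for x in xs]                   # hypothetical prices

# The locally fitted slope changes across the range, exposing the
# non-constant linear relationship a single global line would hide.
_, slope_low = local_linear_predict(0.4, xs, ys)
_, slope_high = local_linear_predict(1.4, xs, ys)
```

Because each fit reweights the data around its query point, the slope near 1.4 carats comes out much steeper than the slope near 0.4 carats, which is the kind of locally varying effect the abstract describes.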

Keywords

Diamond Price Prediction, Regression and Classification Modeling, Local Linear Regression, Lab-Grown vs Natural Diamonds, R².


Reach Us

adcom@acsty2025.org


adcomconf@yahoo.com
