Accepted Papers
Binary Logarithm and Antilogarithm Calculations Using Uniformly Segmented Linear Regression

Dat Ngo, Korea National University of Transportation, South Korea

ABSTRACT

Digital signal processing (DSP) applications fundamentally rely on arithmetic operations such as addition, multiplication, and division. These operations present significant challenges when implemented on resource-constrained edge devices due to their complexity and high computational demands. The logarithmic number system offers a promising solution to mitigate these challenges by reducing delay and hardware area, although often at the cost of precision. This paper introduces an efficient method for computing binary logarithms and antilogarithms using uniformly segmented linear regression. In the proposed approach, the input signal is decomposed into integer and fractional components. Linear regression models are developed for computing logarithms and antilogarithms within equally spaced intervals of the fractional part, with the appropriate model selected based on a predefined number of the most significant bits of the fractional component. Comparative evaluations against benchmark techniques demonstrate the proposed method’s superiority in terms of computational accuracy, making it well-suited for DSP tasks in edge computing environments.
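
The segmented-regression idea can be illustrated in a few lines. The sketch below fits one linear model of log2(1 + f) per uniform segment of the fractional part f and selects the model from its most significant bits; the segment count (8) and selector width (3 bits) are illustrative assumptions, not values taken from the paper.

# A minimal sketch of uniformly segmented linear regression for log2, assuming
# 8 uniform segments selected by the 3 most significant fractional bits.
import numpy as np

K = 3                      # number of MSBs used for segment selection
SEGMENTS = 2 ** K          # uniformly spaced intervals over [0, 1)

def fit_segment_models():
    """Fit one linear model log2(1 + f) ~ a*f + b per uniform segment of f."""
    models = []
    for s in range(SEGMENTS):
        f = np.linspace(s / SEGMENTS, (s + 1) / SEGMENTS, 256, endpoint=False)
        a, b = np.polyfit(f, np.log2(1.0 + f), deg=1)
        models.append((a, b))
    return models

MODELS = fit_segment_models()

def log2_approx(x):
    """Approximate log2(x) = integer exponent + segmented linear estimate."""
    e = int(np.floor(np.log2(x)))    # integer part (leading-one position in hardware)
    f = x / (2.0 ** e) - 1.0         # fractional mantissa in [0, 1)
    seg = min(int(f * SEGMENTS), SEGMENTS - 1)   # segment index from the K MSBs of f
    a, b = MODELS[seg]
    return e + a * f + b

print(log2_approx(10.0), np.log2(10.0))

The antilogarithm follows the same pattern with 2**f fitted per segment instead of log2(1 + f).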

Keywords

Digital signal processing, logarithmic number system, uniform segmentation, linear regression.


Interactive Web: Leveraging AI-driven Code Generation to Simplify Algorithm Development for Novice Programmers in Web-based Applications

Kai Zhang1, Vyom Kumar2, Max Zhang3, 1Moreau Catholic High School, 2Moreau Catholic High School, 3Irvington High School, USA

ABSTRACT

Throughout the history of computer science, the development of algorithms has been key to building the many programs people use every day, such as programs for texting, entertainment, research, and anything on the internet [9]. However, these programs are expensive and time-intensive to create. Even if completing a single project is not particularly difficult, the knowledge required to create it may take years to learn. Since the advent of AI, the process of creating algorithms has been greatly shortened. Anybody with internet access can make software using AI, but a programming novice would still struggle to write new code from pseudocode created by AI [10]. Using the OpenAI API, we created InteractiveWeb, a program that takes input from a user, sends it to ChatGPT, and, after generating an initial version of the requested program, develops a new and improved version of that code. With this program, the AI could accurately develop the projects it was instructed to build, allowing it to create fully working games of Minesweeper, Connect 4, and even Chess.
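
A minimal sketch of this generate-then-refine loop, using the OpenAI Python client, is shown below. The model name, prompt wording, and two-pass structure are illustrative assumptions; the paper's actual prompts are not reproduced here.

# Generate-then-refine sketch with the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(prompt: str) -> str:
    """Ask the model for a first draft of the requested program."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Write a complete Python program: {prompt}"}],
    )
    return resp.choices[0].message.content

def refine(code: str) -> str:
    """Send the first draft back and ask for an improved version."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Improve this program (fix bugs, clean it up) "
                              "and return only the revised code:\n\n" + code}],
    )
    return resp.choices[0].message.content

draft = generate("a playable Minesweeper game in the terminal")
improved = refine(draft)
print(improved)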

Keywords

AI, OpenAI, ChatGPT, Code Generation, Software Engineering using AI, AI-created Software


Rule by Rule: Learning with Confidence Through Vocabulary Expansion

Albert Nossig, Tobias Hell, and Georg Moser, Department of Computer Science, University of Innsbruck, Austria; Data Lab Hell GmbH, Europastraße 2a, 6170 Zirl, Tyrol, Austria

ABSTRACT

In this paper, we present an innovative iterative approach to rule learning specifically designed for (but not limited to) text-based data. Our method focuses on progressively expanding the vocabulary utilized in each iteration, resulting in a significant reduction of memory consumption. Moreover, we introduce a Value of Confidence as an indicator of the reliability of the generated rules. By leveraging the Value of Confidence, our approach ensures that only the most robust and trustworthy rules are retained, thereby improving the overall quality of the rule learning process. We demonstrate the effectiveness of our method through extensive experiments on various textual as well as non-textual datasets, including a use case of significant interest to the insurance industry, showcasing its potential for real-world applications.
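
A heavily simplified sketch of the iterate-and-expand idea follows: each pass admits a larger slice of the vocabulary and keeps only single-word rules whose confidence clears a threshold. The rule form, the confidence definition, and the thresholds are illustrative assumptions, not the paper's algorithm.

# Iterative rule learning with vocabulary expansion and a confidence cutoff.
from collections import Counter, defaultdict

def learn_rules(docs, labels, batch=100, iterations=5, min_confidence=0.9):
    word_counts = Counter(w for d in docs for w in set(d.split()))
    vocab_by_freq = [w for w, _ in word_counts.most_common()]
    rules = {}  # word -> (predicted label, confidence)
    for it in range(1, iterations + 1):
        vocab = set(vocab_by_freq[: it * batch])       # expand the vocabulary
        label_counts = defaultdict(Counter)
        for doc, y in zip(docs, labels):
            for w in set(doc.split()) & vocab:
                label_counts[w][y] += 1
        for w, counts in label_counts.items():
            y, hits = counts.most_common(1)[0]
            confidence = hits / sum(counts.values())
            if confidence >= min_confidence:           # retain only trustworthy rules
                rules[w] = (y, confidence)
    return rules

rules = learn_rules(["great fast service", "terrible slow support", "great product"],
                    ["pos", "neg", "pos"])
print(rules)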

Keywords

Rule Learning, Explainable Artificial Intelligence, Text Categorization, Reliability of Rules.


Beyond Specialization: Assessing the Capabilities of MLLMs in Age and Gender Estimation

Maksim Kuprashevich, Grigorii Alekseenko, and Irina Tolstykh, R&D Department, Salutedev, Uzbekistan

ABSTRACT

Multimodal Large Language Models (MLLMs) have recently gained immense popularity. Powerful commercial models like ChatGPT and Gemini, as well as open-source ones such as LLaVA, are essentially general-purpose models and are applied to solve a wide variety of tasks, including those in computer vision. These neural networks possess such strong general knowledge and reasoning abilities that they have proven capable of working even on tasks for which they were not specifically trained. We compared the capabilities of the most powerful MLLMs to date (ShareGPT4V, ChatGPT-4V/4o, and LLaVA-Next) against a state-of-the-art specialized model on the task of age and gender estimation. This comparison yielded some interesting results and insights about the strengths and weaknesses of the participating models. Furthermore, we attempted various ways of fine-tuning the ShareGPT4V model for this specific task, aiming to achieve state-of-the-art results in this particular challenge. Although such a model would not be practical in production, as it is incredibly expensive compared to a specialized model, it could be very useful for some tasks, such as data annotation.
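
In the spirit of the zero-shot comparison described above, a general-purpose MLLM can be queried for age and gender with a plain prompt, as in the sketch below. The model name, prompt wording, and answer format are illustrative assumptions; the paper's evaluation protocol is not reproduced here.

# Prompting a vision-capable chat model for age and gender estimation.
import base64
from openai import OpenAI

client = OpenAI()

def estimate_age_gender(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Estimate the age (in years) and gender of the person "
                         "in this photo. Answer as 'age, gender' only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(estimate_age_gender("face.jpg"))  # hypothetical input image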

Keywords

MLLM, VLM, Human Attribute Recognition, Age estimation, Gender estimation, Large Models Generalization


A Smart Caffeine Level Predicting and Analysis Solution With Sequential Machine Learning Model Using Artificial Intelligence and Computer Vision

Tianrui Zhang1, Ang Li2, 1School of Professional Studies, Northwestern University, 633 Clark St, Evanston, IL 60208, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840

ABSTRACT

Caffeine is the most widely consumed stimulant globally, yet its overconsumption poses significant health risks. Traditional methods for measuring caffeine content, such as weighing coffee, can be impractical in everyday settings. This paper proposes an innovative solution that leverages artificial intelligence (AI) and machine learning, specifically utilizing a Sequential Convolutional Neural Network (CNN), to predict caffeine levels based on image analysis of coffee. The system processes images to determine brightness, correlating this data with caffeine concentration on a defined scale. Challenges such as dataset selection, prediction accuracy variability, and training epoch limitations were addressed through data cleaning and iterative model training. Experiments revealed that the model achieves a high accuracy rate, indicating its potential as a practical tool for consumers aiming to monitor their caffeine intake. This application not only enhances user convenience but also promotes healthier consumption practices by providing a reliable method for estimating caffeine levels visually.
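
A minimal Keras sketch of a Sequential CNN that maps a coffee photo to a caffeine-level score follows. The input size, layer widths, and the 0-1 caffeine scale are illustrative assumptions rather than the paper's exact architecture.

# Sequential CNN regressing a caffeine level from a coffee image.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # RGB photo of the cup
    layers.Rescaling(1.0 / 255),                # normalize pixel brightness
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # caffeine level on a 0-1 scale
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(train_images, train_caffeine_levels, epochs=20, validation_split=0.2)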

Keywords

Caffeine Level, Artificial Intelligence, Biology


Electronic Games in Education: Use of Media as a Pedagogical Resource

Renan Colzani da Rocha, Alexia Silva da Silveira Araujo, Daniel Lima Lins de Vasconcellos, Letícia Debastiani Frana, Flavio Andaló and Milton Luiz Horn Vieira, Department of Management, Media and Technology, Federal University of Santa Catarina, Florianópolis, Brazil

ABSTRACT

This article seeks to analyze the theoretical bases that support the construction of a fun educational game, elaborating a structure for the creation of games through theoretical concepts studied from the perspective of psychology and its relationship with this type of media. In view of the growth in the consumption of electronic games as entertainment among children and young people in the twenty-first century, we considered the construction of an educational electronic game containing three elements considered central to what makes games fun: challenge, fantasy or narrative, and curiosity. Using as an example the use of historical narrative as an educational tool in the games of the Assassin's Creed series, a project was created to develop an educational game set in Florianópolis in the 1950s, starring the artist and folklorist Franklin Cascaes and based on the short stories published in the book The Fantastic on the Island of Santa Catarina. This article presents the results of the historical research carried out for the development of the game's first map, as well as a summary of its narrative and mechanics.

Keywords

Education, Flow, Franklin Cascaes, Electronic Games, Narrative.


A Stock and ETF Value and Trends Prediction Method with Mobile Platform using Machine Learning and Data Mining

Bohong Tian1, David Garcia2, 1CS Department, Irvine Valley College, 5500 Irvine Center Dr, Irvine, CA 92618, 2University of California, Los Angeles, CA 90095

ABSTRACT

In this paper, we present the development of a stock and ETF forecasting mobile platform using machine learning and deep learning algorithms. The platform collects historical stock data using web scraping techniques (Beautiful Soup) from sources such as Yahoo Finance and Google Finance. We applied Linear Regression, SVM, and LSTM models to predict stock trends and evaluated their performance based on accuracy and efficiency. The experiments showed that while Linear Regression offers quick predictions, it lacks the ability to handle complex market behavior. SVM performed better for classifying trend directions but struggled during periods of volatility. LSTM networks, though computationally intensive, provided the highest accuracy by capturing sequential patterns in the data. This multi-model approach ensures the platform offers investors both speed and precision, providing actionable insights. Future improvements will focus on incorporating hyperparameter optimization and sentiment analysis to further enhance predictive performance.
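
A condensed sketch of the three-model comparison is given below, assuming a one-dimensional series of daily closing prices; the scraping step is stubbed with synthetic data and the 30-day window length is an illustrative assumption.

# Linear Regression, SVM, and LSTM on sliding windows of closing prices.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC
import tensorflow as tf

def make_windows(prices, window=30):
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = np.array(prices[window:])
    return X, y

prices = np.cumsum(np.random.randn(500)) + 100   # placeholder for scraped closes
X, y = make_windows(prices)

# 1) Linear Regression: fast next-day price estimate.
lr = LinearRegression().fit(X, y)

# 2) SVM: classify next-day direction (up/down) instead of the price itself.
direction = (y > X[:, -1]).astype(int)
svm = SVC(kernel="rbf").fit(X, direction)

# 3) LSTM: sequential model over the same windows.
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X[..., None], y, epochs=5, verbose=0)

print("LR next-day:", lr.predict(X[-1:]).item())
print("SVM direction:", svm.predict(X[-1:]).item())
print("LSTM next-day:", float(lstm.predict(X[-1:, :, None], verbose=0)[0, 0]))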

Keywords

Stock, Trends Prediction, Machine Learning.


Deep Reinforcement Learning with Pairwise Modeling for the Weapon-target Assignment Problem

Valentin Colliard, Alain Peres, and Vincent Corruble, Sorbonne Université, CNRS, LIP6; Thales LAS France

ABSTRACT

In this paper, we introduce two deep reinforcement learning approaches designed to tackle the challenges of air defense systems. StarCraft II has been used as a game environment to create attack scenarios in which the agent must learn to defend its assets and points of interest against aerial units. By estimating a value for each weapon-target pair, our agent shows robustness across multiple scenarios, truly distinguishing and prioritizing targets in order to protect its units. These two methods, one using multi-layer perceptrons and the other using the attention mechanism, are compared with rule-based algorithms. Through empirical evaluation, we validate their efficacy in achieving resilient defense strategies across diverse and dynamic environments.
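
The pairwise-value idea can be sketched with a small PyTorch module: an MLP scores every weapon-target pair from concatenated features, and the policy assigns the highest-scoring pair. Feature sizes and the greedy assignment rule are illustrative assumptions, not the paper's full agent.

# MLP scoring of weapon-target pairs for a greedy assignment.
import torch
import torch.nn as nn

class PairwiseValueNet(nn.Module):
    def __init__(self, weapon_dim=8, target_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(weapon_dim + target_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, weapons, targets):
        # weapons: (W, weapon_dim), targets: (T, target_dim)
        W, T = weapons.size(0), targets.size(0)
        pairs = torch.cat([weapons.unsqueeze(1).expand(W, T, -1),
                           targets.unsqueeze(0).expand(W, T, -1)], dim=-1)
        return self.mlp(pairs).squeeze(-1)       # (W, T) value per pair

net = PairwiseValueNet()
weapons, targets = torch.randn(3, 8), torch.randn(5, 8)
values = net(weapons, targets)
w, t = divmod(values.argmax().item(), values.size(1))
print(f"assign weapon {w} to target {t}")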

Keywords

Deep Reinforcement Learning, Weapon-Target Assignment, Simulation


Reach Us

mlsc@acsty2025.org


mlsc.conf@yahoo.com
