Dat Ngo, Korea National University of Transportation, South Korea
Digital signal processing (DSP) applications fundamentally rely on arithmetic operations such as addition, multiplication, and division. These operations present significant challenges when implemented on resource-constrained edge devices due to their complexity and high computational demands. The logarithmic number system offers a promising solution to mitigate these challenges by reducing delay and hardware area, although often at the cost of precision. This paper introduces an efficient method for computing binary logarithms and antilogarithms using uniformly segmented linear regression. In the proposed approach, the input signal is decomposed into integer and fractional components. Linear regression models are developed for computing logarithms and antilogarithms within equally spaced intervals of the fractional part, with the appropriate model selected based on a predefined number of the most significant bits of the fractional component. Comparative evaluations against benchmark techniques demonstrate the proposed method's superiority in terms of computational accuracy, making it well-suited for DSP tasks in edge computing environments.
Digital signal processing, logarithmic number system, uniform segmentation, linear regression.
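To illustrate the scheme the abstract describes, here is a minimal Python sketch (an assumed implementation, not the authors' hardware design): one least-squares line approximates log2(1 + f) on each equal-width segment of the fractional part f, and the most significant bits of f select the segment's model. The segment count and sample density are illustrative assumptions.

```python
# A minimal sketch (assumed implementation, not the authors' design) of binary
# logarithm approximation via uniformly segmented linear regression.
import numpy as np

NUM_SEGMENTS = 8  # assumed number of equal-width segments of the fractional part

# Fit one least-squares line log2(1 + f) ~ a*f + b per uniform segment of [0, 1).
edges = np.linspace(0.0, 1.0, NUM_SEGMENTS + 1)
models = []
for lo, hi in zip(edges[:-1], edges[1:]):
    f = np.linspace(lo, hi, 256)
    a, b = np.polyfit(f, np.log2(1.0 + f), 1)
    models.append((a, b))

def log2_approx(x: float) -> float:
    """Approximate log2(x), x > 0, via integer/fractional decomposition."""
    k = int(np.floor(np.log2(x)))   # integer part; a leading-one detector in hardware
    f = x / 2.0 ** k - 1.0          # fractional part in [0, 1)
    seg = min(int(f * NUM_SEGMENTS), NUM_SEGMENTS - 1)  # model index from the MSBs of f
    a, b = models[seg]
    return k + a * f + b

print(log2_approx(10.0))  # ~3.3219 (exact: 3.32192...)
```

In hardware, the segment lookup reduces to reading the top bits of f, which is what makes uniform segmentation attractive compared to non-uniform schemes.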
Kai Zhang1, Vyom Kumar2, Max Zhang3, 1Moreau Catholic High School, 2Moreau Catholic High School, 3Irvington High School, USA
In the history of computer science, the development of algorithms has been key to composing the many programs people use every day, such as programs for texting, entertainment, research, and anything on the internet [9]. However, all of these programs are expensive and time-intensive to create. Even if completing a single project is not particularly difficult, the knowledge required to create it might take years to learn. Since the advent of AI, the process of creating algorithms has been shortened greatly. Anybody with internet access can make software using AI, but a novice programmer would still struggle to turn AI-generated pseudocode into working code [10]. Using the OpenAI API, we created InteractiveWeb, a program that takes input from a user, sends it to ChatGPT, and, after creating an initial version of the requested program, develops a new and improved version of that code. With this program, the AI could accurately develop the instructed projects, allowing it to create fully working games of Minesweeper, Connect 4, and even Chess.
AI, OpenAI, ChatGPT, Code Generation, Software Engineering using AI, AI-created Software
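The generate-then-refine loop the abstract describes can be sketched with the official OpenAI Python client as below; the model name, prompts, and number of refinement rounds are illustrative assumptions, not InteractiveWeb's actual configuration.

```python
# A minimal sketch of the generate-then-refine loop; the model name, prompts,
# and round count are illustrative assumptions, not InteractiveWeb's settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_program(task: str, rounds: int = 2, model: str = "gpt-4o-mini") -> str:
    # First pass: ask the model for an initial implementation of the task.
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Write a complete Python program: {task}"}],
    )
    code = reply.choices[0].message.content
    # Refinement passes: feed the previous draft back and ask for an improved version.
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"Improve this program (fix bugs, clean it up):\n{code}"}],
        )
        code = reply.choices[0].message.content
    return code

print(generate_program("a playable Minesweeper game in the terminal"))
```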
Albert Nossig, Tobias Hell, and Georg Moser, Department of Computer Science, University of Innsbruck, Data Lab Hell GmbH, Europastraße 2a, 6170 Zirl, Tyrol, Austria
In this paper, we present an innovative iterative approach to rule learning specifically designed for (but not limited to) text-based data. Our method focuses on progressively expanding the vocabulary used in each iteration, resulting in a significant reduction of memory consumption. Moreover, we introduce a Value of Confidence as an indicator of the reliability of the generated rules. By leveraging the Value of Confidence, our approach ensures that only the most robust and trustworthy rules are retained, thereby improving the overall quality of the rule learning process. We demonstrate the effectiveness of our method through extensive experiments on various textual as well as non-textual datasets, including a use case of significant interest to the insurance industry, showcasing its potential for real-world applications.
Rule Learning, Explainable Artificial Intelligence, Text Categorization, Reliability of Rules.
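A minimal sketch of the iterative scheme as we read the abstract: the vocabulary grows each round, candidate rules are learned on the current features, and only rules whose Value of Confidence clears a threshold are kept. The rule learner, the confidence estimate, and every name here are illustrative assumptions, since the paper's exact definitions are not given in the abstract.

```python
# A minimal sketch of the iterative scheme as described: grow the vocabulary
# each round, learn candidate rules on the current feature set, and keep only
# rules whose confidence clears a threshold. `learn_rules` and `confidence`
# are assumed callables; the paper's exact definitions are not in the abstract.
def iterative_rule_learning(documents, labels, vocab_chunks,
                            learn_rules, confidence, threshold=0.9):
    vocabulary, kept_rules = set(), []
    for chunk in vocab_chunks:                       # progressively expand the vocabulary
        vocabulary |= set(chunk)
        # Restrict each document to the tokens admitted so far, bounding memory.
        features = [[tok for tok in doc if tok in vocabulary] for doc in documents]
        for rule in learn_rules(features, labels):   # candidate rules for this round
            if confidence(rule, features, labels) >= threshold:  # Value of Confidence filter
                kept_rules.append(rule)
    return kept_rules
```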
Tianrui Zhang1, Ang Li2, 1School of Professional Studies, Northwestern University, 633 Clark St, Evanston, IL 60208, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840
Caffeine is the most widely consumed stimulant globally, yet its overconsumption poses significant health risks. Traditional methods for measuring caffeine content, such as weighing coffee, can be impractical in everyday settings. This paper proposes an innovative solution that leverages artificial intelligence (AI) and machine learning, specifically utilizing a Sequential Convolutional Neural Network (CNN), to predict caffeine levels based on image analysis of coffee. The system processes images to determine brightness, correlating this data with caffeine concentration on a defined scale. Challenges such as dataset selection, prediction accuracy variability, and training epoch limitations were addressed through data cleaning and iterative model training. Experiments revealed that the model achieves a high accuracy rate, indicating its potential as a practical tool for consumers aiming to monitor their caffeine intake. This application not only enhances user convenience but also promotes healthier consumption practices by providing a reliable method for estimating caffeine levels visually.
Caffeine Level, Artificial Intelligence, Biology
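For concreteness, a minimal Keras sketch of a Sequential CNN regressor of the kind the abstract describes; the layer sizes, the 128x128 input resolution, and the normalized 0-1 caffeine scale are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal Keras sketch of a Sequential CNN regressor like the one described;
# layer sizes, the 128x128 input, and the normalized caffeine scale are
# illustrative assumptions, not the paper's exact architecture.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),             # normalize pixel brightness
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # caffeine level on a 0-1 scale
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(train_images, caffeine_levels, epochs=20, validation_split=0.2)
```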
Renan Colzani da Rocha, Alexia Silva da Silveira Araujo, Daniel Lima Lins de Vasconcellos, Letícia Debastiani França, Flavio Andaló and Milton Luiz Horn Vieira, Department of Management, Media and Technology, Federal University of Santa Catarina, Florianópolis, Brazil
This article analyzes the theoretical bases that support the construction of a fun educational game, elaborating a structure for creating games from theoretical concepts studied from the perspective of psychology and its relationship with this type of media. In view of the growth in the consumption of electronic games as entertainment among children and young people in the twenty-first century, we conceived an educational electronic game containing three elements considered central to what makes games fun: challenge, fantasy or narrative, and curiosity. Taking as an example the use of historical narrative as an educational tool in the games of the Assassin's Creed series, a project was created to develop an educational game set in Florianópolis in the 1950s, starring the artist and folklorist Franklin Cascaes and based on the short stories published in the book The Fantastic on the Island of Santa Catarina. This article presents the results of the historical research carried out for the development of the game's first map, as well as a summary of its narrative and mechanics.
Education, Flow, Franklin Cascaes, Electronic Games, Narrative.
Bohong Tian1, David T. Garcia2, 1CS Department, Irvine Valley College, 5500 Irvine Center Dr, Irvine, CA 92618, 2University of California, Los Angeles, CA 90095
In this paper, we present the development of a stock and ETF forecasting mobile platform using machine learning and deep learning algorithms. The platform collects historical stock data using web scraping techniques (Beautiful Soup) from sources such as Yahoo Finance and Google Finance. We applied Linear Regression, SVM, and LSTM models to predict stock trends and evaluated their performance based on accuracy and efficiency. The experiments showed that while Linear Regression offers quick predictions, it lacks the ability to handle complex market behavior. SVM performed better for classifying trend directions but struggled during periods of volatility. LSTM networks, though computationally intensive, provided the highest accuracy by capturing sequential patterns in the data. This multi-model approach ensures the platform offers investors both speed and precision, providing actionable insights. Future improvements will focus on incorporating hyperparameter optimization and sentiment analysis to further enhance predictive performance.
Stock, Trends Prediction, Machine Learning.
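A minimal sketch of the three-model comparison described above, using scikit-learn and Keras; the sliding-window setup, layer sizes, and the synthetic price series standing in for the scraped data are illustrative assumptions.

```python
# A minimal sketch of the three-model comparison; the sliding window, layer
# sizes, and the synthetic series standing in for scraped prices are
# illustrative assumptions.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

def make_windows(prices, window=10):
    # Turn a price series into (last `window` prices -> next price) pairs.
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    return X, np.array(prices[window:])

prices = np.cumsum(np.random.randn(500)) + 100.0   # stand-in for scraped closing prices
X, y = make_windows(prices)
X_train, X_test, y_train, y_test = X[:-50], X[-50:], y[:-50], y[-50:]

lin = LinearRegression().fit(X_train, y_train)     # fast linear baseline
svm = SVR(kernel="rbf").fit(X_train, y_train)      # non-linear trend model

lstm = tf.keras.Sequential([                       # captures sequential patterns
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(X_train[..., None], y_train, epochs=5, verbose=0)

for name, pred in [("Linear", lin.predict(X_test)),
                   ("SVM", svm.predict(X_test)),
                   ("LSTM", lstm.predict(X_test[..., None]).ravel())]:
    print(f"{name} MAE: {np.mean(np.abs(pred - y_test)):.3f}")
```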