Lilia Rejeb, Lamjed Ben Said, Computing driver tiredness and fatigue in automobile via eye tracking and body movements
Periodicals of Engineering and Natural Sciences (PEN), 10(1), 573. doi:10.21533/pen.v10i1.2705, 2022
Abstract
The aim of this paper is to classify driver tiredness and fatigue in automobiles via eye tracking and body movements using a deep learning based Convolutional Neural Network (CNN) algorithm. Vehicle driver face localization serves as one of the most widely used real-world applications in fields like toll control, traffic accident scene analysis, and suspected vehicle tracking. The research proposes a CNN classifier for simultaneously localizing the region of the human face and the eye positions. The classifier outputs bounding quadrilaterals rather than bounding rectangles, which gives a more precise indication for vehicle driver face localization. The adjusted regions are preprocessed to remove noise and passed to the CNN classifier for real-time processing. The preprocessing of the face features extracts connected components, filters them by size, and groups them into face expressions. The employed CNN is a well-known technology for human face recognition. Once the facial landmarks are extracted from the frames, we leverage classification models and deep learning based convolutional neural networks that predict the state of the driver as ‘Alert’ or ‘Drowsy’ for each extracted frame. The CNN model predicts the output state label (Alert/Drowsy) for each frame, but we also take sequences of image frames into account, as this is extremely important when predicting the state of an individual. The process completes when all regions have a sufficiently high score or a fixed number of retries is exhausted. The output consists of the detected human face type and the list of regions, including the extracted mouth and eyes, with recognition reliability; the CNN reaches an accuracy of 98.57% with 100 epochs of training and testing.
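A minimal sketch of the per-frame classification step (the architecture, input size, and training settings below are illustrative assumptions, not the paper's exact network): a small CNN labels a preprocessed face-region crop as ‘Alert’ or ‘Drowsy’.

```python
# Minimal sketch, not the authors' exact architecture: a small CNN that labels
# a preprocessed face-region crop as 'Alert' (0) or 'Drowsy' (1). Input size,
# layer widths, and training settings are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

def build_drowsiness_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(Drowsy)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Dummy grayscale crops stand in for the preprocessed video frames.
    X = np.random.rand(32, 64, 64, 1).astype("float32")
    y = np.random.randint(0, 2, size=(32,))
    model = build_drowsiness_cnn()
    model.fit(X, y, epochs=2, batch_size=8, verbose=0)
    preds = model.predict(X, verbose=0).ravel()
    print(["Drowsy" if p > 0.5 else "Alert" for p in preds[:5]])
```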
Hana Mechria, , Effect of Denoising on Performance of Deep Convolutional Neural Network For Mammogram Images Classification
KES, 2022
Abstract
Digital mammograms are an important imaging modality for breast cancer screening and diagnosis. Several types of noise appear in mammograms and make the job of detecting breast cancer even more challenging due to missing details in the image information. In this study, we analyze the effect of mammogram image quality on the performance of a Deep Convolutional Neural Network on a mammogram image classification task. Thus, our objective is to show how the classification accuracy varies with the application of a denoising step. Indeed, we investigated two different approaches to breast cancer detection. The first is the classification of the original mammogram images without denoising, and the second is the classification of mammogram images that are denoised using a Deep Convolutional Neural Network, a Wiener filter, or a Median filter. Therefore, the mammogram images are first denoised using each of the three denoising methods and then classified into two classes, cancer and normal, using AlexNet, a pre-trained Deep Convolutional Neural Network, in order to show whether the denoising method used is effective when grafted onto a Deep Convolutional Neural Network, by measuring accuracy, sensitivity, and specificity. Interesting results are achieved where the DCNN denoising step improved the Deep Convolutional Neural Network classification task with an increase of 3.47% for overall accuracy, 5.34% for overall specificity, and 0.56% for overall sensitivity.
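As a rough illustration of the denoise-then-classify pipeline (the Wiener and median filters are the standard SciPy implementations; the paper's DCNN denoiser and the AlexNet classifier are not reproduced here), a denoising front end could look like this:

```python
# Minimal sketch of the denoising front end only. The denoised image would
# then be passed to a classifier such as a pre-trained AlexNet.
import numpy as np
from scipy.signal import medfilt2d, wiener

def denoise(image, method="median", kernel_size=3):
    """Return a denoised copy of a 2-D mammogram image array."""
    if method == "median":
        return medfilt2d(image, kernel_size=kernel_size)
    if method == "wiener":
        return wiener(image, mysize=kernel_size)
    return image  # "none": classify the original image

if __name__ == "__main__":
    noisy = np.random.rand(128, 128)           # placeholder mammogram
    for m in ("none", "median", "wiener"):
        clean = denoise(noisy, method=m)
        print(m, clean.shape)                  # feed `clean` to the classifier
```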
, Lilia Rejeb, Detecting physiological needs using deep inverse reinforcement learning
Applied Artificial Intelligence: AAI, 36(1), 1–25. doi:10.1080/08839514.2021.2022340, 2022
Abstract
Smart health-care assistants are designed to improve the comfort of the patient, where smart refers to the ability to imitate human intelligence in order to facilitate the patient’s life without, or with limited, human intervention. As part of this, we propose a new Intelligent Communication Assistant capable of detecting physiological needs by following a new, efficient Inverse Reinforcement Learning algorithm designed to deal with new time-recorded states. The latter processes the patient’s environment data, learns from the patient’s previous choices, and becomes capable of suggesting the right action at the right time. In this paper, we took the case study of Locked-in Syndrome patients, studied their actual communication methods, and tried to enhance the existing solutions by adding an intelligent layer. We showed that by using Deep Inverse Reinforcement Learning with Maximum Entropy, we can learn to regress the reward of new states from the ambient-environment recorded states. After that, we can suggest the most highly rewarded need to the target patient. We also propose a full architecture of the system by describing the pipeline of information from the ambient environment to the different actors.
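A minimal sketch of the suggestion step only (the candidate needs, feature layout, and network size are illustrative assumptions; the MaxEnt deep IRL training loop that learns the reward network from the patient's past choices is not shown): a learned reward network scores the current ambient state paired with each candidate need, and the assistant suggests the highest-rewarded one.

```python
import numpy as np
from tensorflow.keras import layers, models

NEEDS = ["drink", "eat", "reposition", "toilet", "call_family"]  # illustrative

def build_reward_net(state_dim, n_needs):
    # Input: time-stamped state features concatenated with a one-hot need.
    return models.Sequential([
        layers.Input(shape=(state_dim + n_needs,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # scalar reward estimate
    ])

def suggest_need(reward_net, state):
    scores = []
    for i in range(len(NEEDS)):
        one_hot = np.eye(len(NEEDS))[i]
        x = np.concatenate([state, one_hot])[None, :]
        scores.append(float(reward_net(x, training=False)[0, 0]))
    return NEEDS[int(np.argmax(scores))]

if __name__ == "__main__":
    state = np.array([13.5, 22.0, 0.0, 1.0])   # e.g. hour, room temp, flags (assumed)
    net = build_reward_net(state_dim=4, n_needs=len(NEEDS))  # untrained here
    print("Suggested need:", suggest_need(net, state))
```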
, , Lilia Rejeb, Lamjed Ben Said, ATiPreTA: An analytical model for time-dependent prediction of terrorist attacks
International Journal of Applied Mathematics and Computer Science (AMCS), 32(3), 495-510 . doi: 10.34768/amcs-2022-0036, 2022
Abstract
In counter-terrorism actions, commanders are confronted with difficult and important challenges. Their decision-making processes follow military instructions and must consider the humanitarian aspect of the mission. In this paper, we aim to answer the question: what would the casualties be if governmental forces reacted in a given way with given resources? Within such a context, decision-support systems are required due to the variety and complexity of modern attacks as well as the enormous quantity of information that must be processed in real time. The majority of mathematical models are not suitable for real-time events. Therefore, we propose an analytical model for time-dependent prediction of terrorist attacks (ATiPreTA). The output of our model is consistent with casualty data from two major terrorist events in Tunisia: the Bardo and Sousse attacks. The sensitivity and experimental analyses show that the results are significant. Some operational insights are also discussed.
Saoussen Bel Haj Kacem, , Unification of Imprecise Data: Translation of Fuzzy to Multi-Valued Knowledge Over Y-Axis
International Journal of Fuzzy System Applications (IJFSA), vol. 11, no. 1, p. 1-27, 2022
Abstract
Inference systems are a well-defined technology derived from knowledge-based systems. Their main purpose is to model and manage knowledge as well as expert reasoning to ensure relevant decision making while staying close to human induction. The handled knowledge is usually imperfect and may be treated using a non-classical logic such as fuzzy logic or symbolic multi-valued logic. Nonetheless, it is sometimes required to consider both fuzzy and symbolic multi-valued knowledge within the same knowledge-based system. For that purpose, we propose in this paper an approach that is able to standardize fuzzy and symbolic multi-valued knowledge. We intend to convert fuzzy knowledge into symbolic form by projecting it over the Y-axis of its membership functions. Consequently, it becomes feasible to work within a symbolic multi-valued context. Our approach gives experts more flexibility in modeling their knowledge regardless of its type. A numerical study is provided to illustrate the potential application of the proposed methodology.
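One plausible, much-simplified reading of the Y-axis projection (the triangular membership function and the five-level symbolic scale are assumptions for illustration, not the paper's operators): a fuzzy membership degree in [0, 1] is snapped to the nearest symbolic truth degree of a multi-valued scale.

```python
# Illustrative sketch only: map a membership degree to a symbolic degree by
# partitioning the Y-axis of the membership function into M levels.
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

SCALE = ["not-at-all", "little", "moderately", "very", "totally"]  # M = 5, assumed

def to_symbolic(mu, scale=SCALE):
    """Project a membership degree over the Y-axis onto a symbolic degree."""
    return scale[round(mu * (len(scale) - 1))]

if __name__ == "__main__":
    # Fuzzy term "warm" over temperature, then its symbolic counterpart at 27 degrees.
    mu = triangular(27.0, a=20.0, b=30.0, c=40.0)
    print(mu, "->", to_symbolic(mu))
```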
Yasmine Amor, Lilia Rejeb, Rahma Ferjani, Lamjed Ben Said, Hierarchical Multi-agent System for Sleep Stages Classification
International Journal on Artificial Intelligence Tools, 2022
Abstract
Sleep is a fundamental restorative process for human mental and physical health. Considering the risks that sleep disorders can present, sleep analysis is considered a primordial task to identify the different abnormalities. Sleep scoring is the gold standard for human sleep analysis. The manual sleep scoring task is considered exhausting, subjective, time-consuming and error prone. Moreover, sleep scoring is based on fixed epoch lengths, usually of 30 seconds, which leads to an information loss problem. In this paper, we propose an automatic unsupervised sleep scoring model. The aim of our work is to consider different epoch durations to classify sleep stages. Therefore, we developed a model based on Hierarchical Multi-Agent Systems (HMASs) with different layers, where each layer contains a number of adaptive agents working with a specific epoch duration. The effectiveness of our approach was investigated using real electroencephalography (EEG) data. Good results were reached in a comparative study against machine learning techniques commonly used for sleep stage classification problems.
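A minimal structural sketch of the layered organization (the epoch durations, sampling rate, single agent per layer, and the placeholder scoring rule are assumptions; the paper's agents are adaptive and unsupervised):

```python
import numpy as np

SAMPLING_RATE = 100  # Hz, assumed

class EpochAgent:
    """Scores an EEG signal using epochs of a fixed, layer-specific duration."""
    def __init__(self, epoch_seconds):
        self.epoch_seconds = epoch_seconds

    def score(self, eeg):
        step = self.epoch_seconds * SAMPLING_RATE
        labels = []
        for start in range(0, len(eeg) - step + 1, step):
            epoch = eeg[start:start + step]
            # Placeholder rule only: low variance -> deeper sleep stage.
            labels.append("N3" if epoch.var() < 0.5 else "Wake/REM/N1-N2")
        return labels

class HierarchicalScorer:
    """One layer per epoch duration; each layer holds its own agent(s)."""
    def __init__(self, layer_epochs=(30, 10, 5)):  # seconds per layer, assumed
        self.layers = [EpochAgent(e) for e in layer_epochs]

    def score(self, eeg):
        return {agent.epoch_seconds: agent.score(eeg) for agent in self.layers}

if __name__ == "__main__":
    eeg = np.random.randn(60 * SAMPLING_RATE)  # one minute of fake EEG
    for epoch_len, labels in HierarchicalScorer().score(eeg).items():
        print(f"{epoch_len}s epochs:", labels)
```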
, , Nader Kolsi, Effective Resource Utilization in Heterogeneous Hadoop Environment Through a Dynamic Inter-cluster and Intra-cluster Load Balancing
Asian Conference on Intelligent Information and Database Systems (ACIIDS), Ho Chi Minh City, Vietnam, Part 2, pp. 669-681, 2022
Abstract
Apache Hadoop is one of the most popular distributed computing systems, used largely for big data analysis and processing. The Hadoop cluster hosts multiple parallel workloads requiring various resource usage (CPU, RAM, etc.). In practice, in heterogeneous Hadoop environments, resource-intensive tasks may be allocated to the lower-performing nodes, causing load imbalance between and within clusters and high data transfer costs. These weaknesses lead to performance deterioration of the Hadoop system and delay the completion of all submitted jobs. To overcome these challenges, this paper proposes an efficient and dynamic load balancing policy for a heterogeneous Hadoop YARN cluster. This novel load balancing model is based on clustering nodes into subgroups of nodes similar in performance, and then allocating different jobs to these subgroups using a multi-criteria ranking. This policy ensures the most accurate match between resource demands and available resources in real time, which decreases data transfer in the cluster. The experimental results show that the introduced approach reduces completion times noticeably, by 42% and 11% compared with H-Fair and a load balancing approach, respectively. Thus, Hadoop can rapidly release the resources for the next job, which enhances the overall performance of distributed computing systems. The obtained findings also reveal that our approach optimizes the use of the available resources and avoids cluster overload in real time.
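A minimal sketch of the two ideas combined (the node metrics, weights, and two-cluster setting are illustrative assumptions, not the paper's calibrated policy): nodes are grouped into performance subgroups, and the subgroups are ranked for an incoming job with a weighted multi-criteria score.

```python
import numpy as np
from sklearn.cluster import KMeans

# Columns: free CPU cores, free RAM (GB), disk throughput (MB/s) per node (assumed).
nodes = np.array([
    [8, 32, 400],
    [8, 30, 380],
    [2,  8, 120],
    [2,  6, 100],
    [4, 16, 250],
])

# Cluster nodes into subgroups of similar performance.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(nodes)

def rank_subgroups(nodes, labels, weights=(0.5, 0.3, 0.2)):
    """Score each subgroup by the weighted mean of its normalized metrics."""
    norm = nodes / nodes.max(axis=0)
    scores = {}
    for g in set(labels):
        scores[int(g)] = float(norm[labels == g].mean(axis=0) @ np.array(weights))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    ranking = rank_subgroups(nodes, kmeans.labels_)
    print("Subgroups best-first:", ranking)  # place the job on the top subgroup
```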
, Lilia Rejeb, , Yasmine Amor, Lassaad Baati, Lamjed Ben Said, Optimal Electric Vehicles Route Planning with Traffic Flow Prediction and Real-Time Traffic Incidents
International Journal of Electrical and Computer Engineering Research, 2022
Abstract
Electric Vehicles (EVs) are regarded to be among the most environmentally and economically efficient transportation solutions. However, barriers and range limitations hinder this technology’s progress and deployment. In this paper, we examine EV route planning to derive optimal routes considering energy consumption by analyzing historical trajectory data. More specifically, we propose a novel approach for EV route planning that considers real-time traffic incidents, road topology, charging station locations during battery failure, and finally, traffic flow prediction extracted from historical trajectory data to generate energy maps. Our approach consists of four phases: the off-line phase, which aims to build the energy graph; the application of the A* algorithm to deliver the optimal EV path; the NEAT trajectory clustering, which aims to produce dense trajectory clusters for a given period of the day; and finally, the on-line phase based on our algorithm to plan an optimal EV path using real traffic incidents, dense trajectory clusters, road topology information, vehicle characteristics, and charging station locations. We set up experiments on real cases to establish the optimal route for electric cars, demonstrating the effectiveness and efficiency of our proposed algorithm.
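A minimal sketch of the routing step only (the graph, edge energies, and zero heuristic are illustrative assumptions; the paper builds its energy graph from historical trajectories, traffic incidents, road topology, and charging-station locations): A* searches a graph whose edge weights stand for predicted energy consumption.

```python
import heapq

# energy_graph[u] = list of (neighbor, energy_kwh) edges -- assumed values
energy_graph = {
    "A": [("B", 1.2), ("C", 2.5)],
    "B": [("C", 0.8), ("D", 2.0)],
    "C": [("D", 1.0)],
    "D": [],
}

def a_star(graph, start, goal, heuristic=lambda n: 0.0):
    """Return (total_energy, path) minimizing the summed edge energy."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best = {}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for neighbor, energy in graph[node]:
            new_cost = cost + energy
            heapq.heappush(frontier,
                           (new_cost + heuristic(neighbor), new_cost,
                            neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    print(a_star(energy_graph, "A", "D"))  # -> (3.0, ['A', 'B', 'C', 'D'])
```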
Anouer Bennajeh, Lamjed Ben Said, Autonomous agent adaptive driving control based on fuzzy logic theory and normative behavior
This work presents an adaptive driving model using fuzzy logic and software agents to imitate human car-following behavior; validated with the Federal Highway Administration dataset, the model shows high similarity with human trajectories, 2022
Abstract
Studying driver behaviors has become a major concern for the transportation community, businesses, and the public. Thus, based on simulation, we propose an adaptive driving model of car-following behavior grounded in the normative behavior of the driver during decision-making and anticipation, whose intention is to meet the twin objectives of imitating ordinary human behavior and ensuring road safety. The presented model is based on the software agent paradigm to model a human driver and on Fuzzy Logic Theory to reflect the driver agent’s reasoning. To validate our model, we used the dataset from the program of the US Federal Highway Administration. In this context, we observe excellent agreement between the trajectory adopted by the autonomous driver agent and the trajectories adopted by the human drivers. Moreover, the advantage of our model is that it works at different velocities.
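A minimal sketch of a fuzzy car-following controller (the membership functions, rule base, and output accelerations are illustrative assumptions, not the paper's calibrated driver-agent model): the gap to the leader and the relative speed are fuzzified, each rule fires with a strength, and the acceleration is a weighted average of the rule outputs.

```python
def tri(x, a, b, c):
    """Triangular membership; acts as a shoulder when a == b or b == c."""
    if x <= a:
        return 1.0 if a == b else 0.0
    if x >= c:
        return 1.0 if b == c else 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_acceleration(gap_m, rel_speed_mps):
    """rel_speed_mps = leader speed - follower speed (negative means closing)."""
    gap = {"small": tri(gap_m, 0, 0, 20),
           "medium": tri(gap_m, 10, 25, 40),
           "large": tri(gap_m, 30, 60, 60)}
    rel = {"closing": tri(rel_speed_mps, -10, -10, 0),
           "steady": tri(rel_speed_mps, -2, 0, 2),
           "opening": tri(rel_speed_mps, 0, 10, 10)}
    # (gap label, relative-speed label, crisp acceleration in m/s^2) -- assumed
    rules = [("small", "closing", -2.0), ("small", "steady", -1.0),
             ("small", "opening", -0.5), ("medium", "closing", -1.0),
             ("medium", "steady", 0.0), ("medium", "opening", 0.5),
             ("large", "closing", 0.0), ("large", "steady", 0.8),
             ("large", "opening", 1.0)]
    num, den = 0.0, 0.0
    for g, r, accel in rules:
        w = min(gap[g], rel[r])  # rule firing strength
        num += w * accel
        den += w
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    # 12 m behind a leader that is 3 m/s slower: expect a negative (braking) value.
    print(round(fuzzy_acceleration(gap_m=12.0, rel_speed_mps=-3.0), 2))
```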
, , Predictive service placement in cloud using deep learning and frequent subgraph mining
Journal of Ambient Intelligence and Humanized Computing, 2022
Abstract
Over the last few years, service placement has become a strategic and fundamental management operation that allows cloud providers to deploy and arrange their services on high-performance computation/storage servers, while taking various constraints (e.g., resource usage, security levels, data transfer time, SLA) into consideration. Despite the huge number of service placement schemes, most of them are static and do not take cloud changes into account. To cope with this issue, predicting the cloud zones’ performance and availability should precede the placement task. For this purpose, we adopt a gated recurrent neural network as a deep learning variant that allows forecasting the next short-term resource consumption on cloud servers and predicting the future service migration traffic between them. Also, to place cloud services’ application/data components on the optimum cloud zones, the frequently used high-performance servers are selected by mining the graph-like placement history, i.e., previous placement plans. To do so, we propose a Frequent Subgraph Mining algorithm that is reinforced with a tuning method to increase the probability of executing the past placement schemes. Experimental results show that our predictive approach outperforms state-of-the-art placement schemes in terms of performance and prediction quality.
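A minimal sketch of the forecasting step (the window length, architecture, and synthetic CPU trace are assumptions; the frequent-subgraph-mining side of the approach is not reproduced here): a GRU reads a sliding window of past CPU-usage measurements for one cloud server and predicts the next value.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 12  # past measurements used per prediction (assumed)

def make_windows(series, window=WINDOW):
    """Build (samples, window, 1) inputs and next-value targets from a 1-D trace."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.GRU(32),
    layers.Dense(1),  # next CPU-usage value
])
model.compile(optimizer="adam", loss="mse")

if __name__ == "__main__":
    # Synthetic utilization trace standing in for monitored cloud-server data.
    t = np.arange(500, dtype="float32")
    cpu = 0.5 + 0.3 * np.sin(t / 20) + 0.05 * np.random.randn(500).astype("float32")
    X, y = make_windows(cpu)
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print("next-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```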


