Publications

  • 2019
    Mouna Belhaj, Hanen Lejmi, Lamjed Ben Said

    Studying emotions at work using agent-based modeling and simulation

    In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 571-583). Cham: Springer International Publishing, 2019

    Abstract

    Emotion in the workplace is a topic that has increasingly attracted the attention of both organizational practitioners and academics. This is due to the fundamental role emotions play in shaping human resources' behaviors, performance, productivity, interpersonal relationships, and engagement at work. In the current research, a computational social simulation approach is adopted to replicate and study the emotional experiences of employees in organizations. More specifically, an emotional agent-based model of an employee at work is proposed. The developed model is used in a computer simulator, WEMOS (Workers EMotions in Organizations Simulator), to conduct analyses of the most likely emotion-evoking stimuli as well as the emotional content of several work-related stimuli. Simulation results can be employed to gain a deeper understanding of emotions in work life.
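
    The paper itself does not include code; the Python sketch below only illustrates the kind of agent-based appraisal loop such a simulator could build on. The stimulus names, appraisal weights, decay factor, and "strong emotion" threshold are illustrative assumptions, not WEMOS internals.

    # Not from the paper: a minimal agent-based sketch of workplace emotion appraisal.
    # Stimulus names, appraisal weights, the decay factor, and the "strong emotion"
    # threshold are illustrative assumptions, not WEMOS internals.
    import random
    from collections import Counter

    STIMULI = {"praise": 0.6, "deadline": -0.4, "conflict": -0.7, "achievement": 0.8}

    class Employee:
        def __init__(self):
            self.mood = 0.0            # running affective state in [-1, 1]
            self.triggers = Counter()  # counts stimuli that evoked strong emotions

        def appraise(self, stimulus):
            impact = STIMULI[stimulus]
            self.mood = max(-1.0, min(1.0, 0.9 * self.mood + impact))  # decay + appraisal
            if abs(impact) >= 0.6:     # assumed threshold for a "strong" emotion
                self.triggers[stimulus] += 1

    def simulate(steps=1000, seed=0):
        random.seed(seed)
        agent = Employee()
        for _ in range(steps):
            agent.appraise(random.choice(list(STIMULI)))
        return agent.triggers.most_common()

    print(simulate())  # most frequent emotion-evoking stimuli in this toy run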

    Maha Elarbi, Slim Bechikh, Carlos Artemio Coello Coello, Mohamed Makhlouf, Lamjed Ben Said

    Approximating complex Pareto fronts with predefined normal-boundary intersection directions

    IEEE Transactions on Evolutionary Computation, 24(5), 809-823, 2019

    Abstract

    Decomposition-based evolutionary algorithms using predefined reference points have shown good performance in many-objective optimization. Unfortunately, almost all experimental studies have focused on problems having regular Pareto fronts (PFs). Recently, it has been shown that the performance of such algorithms deteriorates when facing irregular PFs, such as degenerate, discontinuous, inverted, strongly convex, and/or strongly concave fronts. The main issue is that the predefined reference points may not all intersect with the PF. Therefore, many researchers have proposed to update the reference points with the aim of adapting them to the discovered Pareto shape. Unfortunately, the adaptive update does not really solve the issue, for two main reasons. On the one hand, it is difficult to set the timing and frequency of updates. On the other hand, it is not easy to define how to update the search directions for an unknown PF shape. This article proposes to approximate irregular PFs using a set of predefined normal-boundary intersection (NBI) directions. The main motivation is that when using a set of well-distributed NBI directions, all these directions intersect with the PF regardless of its shape, except in the case of discontinuous and/or degenerate fronts. To handle the latter cases, a simple interaction mechanism between the decision maker (DM) and the algorithm is used: the DM is asked whether the number of NBI directions needs to be increased at some stages of the evolutionary process. If so, the resolution of the NBI directions that intersect the PF is increased to properly cover discontinuous and/or degenerate PFs. Our experimental results on benchmark problems with regular and irregular PFs, having up to fifteen objectives, show the merits of our algorithm when compared to eight of the most representative state-of-the-art algorithms.
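
    As an illustration of what "a set of well-distributed NBI directions" can look like in practice, the Python sketch below enumerates Das-Dennis simplex-lattice weight vectors, a standard way of generating such direction sets; the paper's actual directions and the parameters M and H used here are assumptions.

    # Not from the paper: Das-Dennis simplex-lattice generation of well-distributed
    # reference (NBI-style) directions. M objectives, H divisions are illustrative.
    from itertools import combinations

    def das_dennis(M, H):
        """Return all weight vectors with M components summing to 1 in steps of 1/H."""
        directions = []
        for bars in combinations(range(H + M - 1), M - 1):  # stars-and-bars enumeration
            prev, counts = -1, []
            for b in bars:
                counts.append(b - prev - 1)
                prev = b
            counts.append(H + M - 2 - prev)
            directions.append([c / H for c in counts])
        return directions

    dirs = das_dennis(M=3, H=4)   # 15 directions for 3 objectives
    print(len(dirs), dirs[:3])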

    Marwa Chabbouh, Slim Bechikh, Lamjed Ben Said, Chih-Cheng Hung

    Multi-objective evolution of oblique decision trees for imbalanced data binary classification

    Swarm and Evolutionary Computation, 49: 1-22, 2019

    Abstract

    Imbalanced data classification is one of the most challenging problems in data mining. In such problems, there are two types of classes: the majority class and the minority one. The former has a relatively high number of instances, while the latter contains far fewer. As most traditional classifiers assume that data is evenly distributed across classes, they may fail considerably at recognizing instances of the minority class. Several interesting approaches have been proposed in the literature to handle the class imbalance issue, and the Oblique Decision Tree (ODT) is one of them. Nevertheless, most standard ODT construction algorithms use a greedy search process, while only very few works have addressed this induction problem using an evolutionary approach, and without really considering the class imbalance issue. To cope with this limitation, we propose in this paper a multi-objective evolutionary approach to find optimized ODTs for imbalanced binary classification. Our approach, called ODT-Θ-NSGA-III (ODT-based Θ-Nondominated Sorting Genetic Algorithm-III), is motivated by its abilities: (a) to escape local optima in the ODT search space and (b) to maximize Precision and Recall simultaneously. Thanks to these two features, ODT-Θ-NSGA-III provides competitive and better results when compared to many state-of-the-art classification algorithms on commonly used imbalanced benchmark data sets.
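
    To make the two objectives concrete, the Python sketch below evaluates Precision and Recall for a single oblique split (a hyperplane w.x + b >= 0 predicting the minority class), the kind of quantity a multi-objective search such as the one in this paper would maximize. The random data and hyperplane are illustrative assumptions, not the paper's setup.

    # Not from the paper: precision and recall of one oblique (hyperplane) split on
    # imbalanced binary data; the data and hyperplane are illustrative assumptions.
    import numpy as np

    def oblique_split_objectives(w, b, X, y):
        """Precision and recall of the half-space w.x + b >= 0 for the positive class."""
        pred = (X @ w + b >= 0).astype(int)
        tp = np.sum((pred == 1) & (y == 1))
        fp = np.sum((pred == 1) & (y == 0))
        fn = np.sum((pred == 0) & (y == 1))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall   # both maximized by the multi-objective search

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (rng.random(200) < 0.1).astype(int)   # roughly 10% minority class
    print(oblique_split_objectives(np.array([1.0, -0.5]), 0.2, X, y))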

    Hana Mechria, Mohamed Salah Gouider, Khaled Hassine

    Breast Cancer Detection using Deep Convolutional Neural Network

    ICAART, 2019

    Abstract

    The Deep Convolutional Neural Network (DCNN) is a popular and powerful deep learning algorithm for image classification. However, DCNN applications in medical imaging remain relatively rare, because large datasets of medical images are not always available. In this paper, we present two DCNN architectures, a shallow DCNN and a pre-trained DCNN model (AlexNet), to detect breast cancer in 8000 mammographic images extracted from the Digital Database for Screening Mammography. In order to validate the performance of DCNN in breast cancer detection on a large dataset, we carried out a comparative study with a second deep learning algorithm, Stacked AutoEncoders (SAE), in terms of accuracy, sensitivity, and specificity. The DCNN method achieved the best results, with 89.23% accuracy, 91.11% sensitivity, and 87.75% specificity.
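
    The paper does not state its framework; the PyTorch/torchvision sketch below only illustrates the pre-trained AlexNet route it describes, replacing the final layer with a two-class head (normal vs. cancer). The dummy batch and hyperparameters are assumptions.

    # Not from the paper: fine-tuning an ImageNet-pretrained AlexNet for a two-class
    # mammogram task. Data loading and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import alexnet, AlexNet_Weights

    model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)   # pretrained backbone
    model.classifier[6] = nn.Linear(4096, 2)                 # new head: normal vs. cancer

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    # One illustrative training step on a dummy batch shaped like preprocessed scans.
    images = torch.randn(8, 3, 224, 224)    # grayscale scans replicated to 3 channels
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))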

    Anouer Bennajeh, Slim Bechikh, Lamjed Ben Said, Samir Aknine

    Bi-level Decision-making Modeling for an Autonomous Driver Agent: Application in the Car-following Driving Behavior

    2019

    Abstract

    Road crashes have become an epidemic in road traffic and continue to grow: according to the World Health Organization, they cause more than 1.24 million deaths and 20 to 50 million non-fatal injuries each year, and they are expected by 2020 to become the third leading global cause of illness and injury. In this context, this paper addresses the car-following driving behavior problem, which alone accounts for almost 70% of road accidents, most of which are caused by the driver's incorrect judgment in keeping a safe distance. We propose a decision-making model based on bi-level modeling, whose objective is to reconcile road safety with reduced travel time. To meet this objective, we use a fuzzy logic approach to model the concept of anticipation, in order to extract additional unobservable data from the road environment, and also to model driver behaviors, in particular normative behaviors. The experimental results indicate that decisions to increase velocity made by our model remain consistent with road safety.
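
    As a toy illustration of the fuzzy anticipation idea (not the paper's actual rule base), the Python sketch below combines two assumed fuzzy sets, "large gap" and "gap is opening", to decide whether a following vehicle may safely increase its velocity.

    # Not from the paper: a toy fuzzy-logic rule for the accelerate/hold decision.
    # Membership functions, rule, and thresholds are illustrative assumptions.
    def tri(x, a, b, c):
        """Triangular membership function with support (a, c) and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def safe_to_accelerate(gap_m, closing_speed_mps):
        large_gap = tri(gap_m, 20, 60, 120)          # fuzzy set: "large gap"
        opening = tri(-closing_speed_mps, 0, 3, 8)   # fuzzy set: "gap is opening"
        degree = min(large_gap, opening)             # rule: large gap AND opening gap
        return degree > 0.5, degree

    print(safe_to_accelerate(gap_m=55, closing_speed_mps=-4))   # (True, 0.8)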

    Marwa Hammami, Slim Bechikh, Chih-Cheng Hung, Lamjed Ben Said

    Weighted-Feature Construction as a Bi-level Optimization Problem

    IEEE Congress on Evolutionary Computation, pp. 1604-161, 2019

    Abstract

    Feature selection and construction are important pre-processing techniques in machine learning and data mining. They may allow not only dimensionality reduction but also improvements in classifier accuracy and efficiency. Feature selection aims at selecting relevant features from the original feature set, some of which may be less informative, in order to achieve good performance. Feature construction may work well because it creates new high-level features, but these features do not all have the same degree of importance, which makes weighted-feature construction a very challenging topic. In this paper, we propose a bi-level evolutionary approach for efficient feature selection with simultaneous feature construction and feature weighting, called Bi-level Weighted-Features Construction (BWFC). The basic idea of BWFC is to exploit the bi-level model to perform feature selection and weighted-feature construction with the aim of finding an optimal subset of feature combinations. Our approach has been assessed on six high-dimensional datasets and compared against three existing approaches, using three different classifiers for accuracy evaluation. Experimental results show that our proposed algorithm gives competitive and better results with respect to state-of-the-art algorithms.
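
    To make the encoding idea concrete, the Python sketch below evaluates one candidate of the kind BWFC evolves: a feature-selection mask (upper level) plus weights (lower level) that build a constructed feature, scored by classifier accuracy. The dataset, classifier, and encoding are illustrative assumptions, not the paper's exact setup.

    # Not from the paper: scoring one (selection mask, weights) candidate by the
    # accuracy of a classifier on the selected plus constructed features.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def evaluate(mask, weights, X, y):
        """Accuracy of selected features augmented with one weighted constructed feature."""
        selected = X[:, mask]
        constructed = selected @ weights[mask]        # weighted linear combination
        X_new = np.column_stack([selected, constructed])
        return cross_val_score(KNeighborsClassifier(), X_new, y, cv=5).mean()

    X, y = load_breast_cancer(return_X_y=True)
    rng = np.random.default_rng(0)
    mask = rng.random(X.shape[1]) < 0.3               # upper level: feature selection
    weights = rng.random(X.shape[1])                  # lower level: feature weights
    print(evaluate(mask, weights, X, y))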

  • 2018
    Ameni Hedhli, Haithem Mezni

    A DFA-based approach for the deployment of BPaaS fragments in the cloud

    Concurrency and Computation: Practice and Experience, 2018

    Abstract

    Cloud computing is an emerging technology that is largely adopted by the current computing industry. With the growing number of Cloud services, Cloud providers' main focus is how best to offer efficient services (e.g., SaaS, BPaaS, mobile services, etc.) in order to attract potential customers. To meet this goal, service arrangement and placement in the cloud is becoming a serious problem, because an optimal placement of these applications and their related data in accordance with the available resources can increase companies' benefits. Since business processes are widely deployed in the cloud, the research conducted here aims to enhance business process outsourcing by providing an optimized placement scheme that would attract cloud customers. In light of these facts, the purpose of this paper is to address the BPaaS placement problem while optimizing both the total execution time and cloud resource usage. To do so, we first determine the redundant BPaaS fragments using a DNA fragment assembly technique, and apply a variant of the Genetic Algorithm to solve it. Then, we propose a placement algorithm, which produces an optimized placement scheme on the basis of the determined fragment relations. We conclude with an implementation of the whole placement process and a set of experimental results that show the feasibility and efficiency of the proposed approach.
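
    As a toy illustration only (not the paper's actual model), the Python sketch below scores one placement of BPaaS fragments on cloud resources, the kind of candidate a genetic algorithm would evolve; the fragment costs, resource speeds, and weighting are assumptions.

    # Not from the paper: a toy fitness for assigning BPaaS fragments to resources,
    # balancing execution time against the number of resources used.
    import random

    FRAGMENT_TIME = [4, 2, 6, 3]          # execution time of each fragment (assumed)
    RESOURCE_SPEED = [1.0, 1.5, 2.0]      # relative speed of each cloud resource (assumed)

    def fitness(placement):
        """Lower is better: makespan plus a penalty for the number of resources used."""
        finish = [0.0] * len(RESOURCE_SPEED)
        for frag, res in enumerate(placement):
            finish[res] += FRAGMENT_TIME[frag] / RESOURCE_SPEED[res]
        return max(finish) + 0.5 * len(set(placement))

    random.seed(0)
    candidates = [[random.randrange(len(RESOURCE_SPEED)) for _ in FRAGMENT_TIME]
                  for _ in range(50)]     # random candidates stand in for a GA population
    best = min(candidates, key=fitness)
    print(best, fitness(best))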

    Tarek Mahdhi, Haithem Mezni

    A prediction-Based VM consolidation approach in IaaS Cloud Data Centers

    Journal of Systems and Software, 2018

    Abstract

    Recent years have witnessed rapid growth in exploiting Cloud environments to host and deliver various types of virtualized resources as on-demand services. In order to use Cloud resources optimally, the arrangement of virtual machines (VMs) in physical machines (PMs) must be performed strategically, because placing VMs in accordance with the available resources can reduce energy consumption, improve resource utilization and, consequently, increase companies' benefits. However, VMs can have time-varying workloads, which leads to degraded performance and increased power consumption. Thus, re-configuring the VM placement is essential. Virtual machine consolidation aims to make optimal use of the available resources by allocating several VMs on a set of PMs. To determine the PMs' capacities to reallocate VMs, it is important to predict their states based on the resource utilization history within each VM and the past VM migration traffic. However, a common limitation of existing VM consolidation approaches is the lack of information about the history (and the future) of VM migration traffic. In this paper, we propose a virtual machine consolidation approach based on the estimation of requested resources and future VM migration traffic. We exploit the strength of the Kernel Density Estimation (KDE) technique as a powerful means to forecast the future resource usage of each VM, and the AKKA toolkit as an actor-based model that allows exchanging useful information about host states. We adopt a weighted-graph representation to model the history of migration traffic between PMs and to design the actor-based topology of the data center. The obtained results show the effectiveness of our approach in terms of total number of migrations and energy consumption.
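
    The Python sketch below illustrates only the KDE forecasting step described above, estimating a VM's future CPU demand from its usage history; the history values and the 95th-percentile rule for the requested resource are illustrative assumptions.

    # Not from the paper: Gaussian KDE over a VM's CPU-usage history, with a
    # conservative percentile used as the forecast demand (assumed rule).
    import numpy as np
    from scipy.stats import gaussian_kde

    history = np.array([22, 25, 31, 28, 35, 40, 38, 42, 37, 45], dtype=float)  # % CPU

    kde = gaussian_kde(history)            # density estimate of past utilization
    samples = kde.resample(10_000)[0]      # draw from the estimated distribution
    forecast = np.percentile(samples, 95)  # conservative estimate of future demand

    print(f"predicted CPU demand: {forecast:.1f}%")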

    Haithem Mezni, Sofiane Ait Arab, Djamal Benslimane, Karim Benouaret

    An evolutionary clustering approach based on temporal aspects for context-aware service recommendation

    Journal of Ambient Intelligence and Humanized Computing, 2018

    Abstract

    Over the last few years, recommendation techniques have emerged to cope with the challenging task of optimal service selection and to help consumers satisfy their needs and preferences. However, most existing models for service recommendation only consider the traditional user-service relation, while in the real world the perception and popularity of Web services may depend on several conditions, including temporal, spatial, and social constraints. Such additional factors influence users' preferences in recommender systems to a large extent. In this paper, we propose a context-aware Web service recommendation approach with a specific focus on the time dimension. First, the K-means clustering method is hybridized with a multi-population variant of the well-known Particle Swarm Optimization (PSO) in order to exclude the less similar users, who share few common Web services with the active user in specific contexts. The Slope One method is then applied to predict the missing ratings in the user's current context. Finally, a recommendation algorithm is proposed to return the top-rated services. Experimental studies confirmed the accuracy of our recommendation approach when compared to three existing solutions.
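
    As an illustration of the prediction step, the Python sketch below applies the weighted Slope One scheme to predict a missing service rating from the ratings of users retained after clustering; the tiny rating dictionaries are illustrative assumptions.

    # Not from the paper: weighted Slope One prediction of one missing rating.
    def slope_one_predict(target_ratings, others, item):
        """Predict the target user's rating for `item` from average deviations to co-rated items."""
        diffs, weights = [], []
        for j, r_j in target_ratings.items():
            devs = [u[item] - u[j] for u in others if item in u and j in u]
            if devs:
                diffs.append((r_j + sum(devs) / len(devs)) * len(devs))
                weights.append(len(devs))
        return sum(diffs) / sum(weights) if weights else None

    active = {"s1": 4, "s2": 3}                # active user's known service ratings
    cluster = [{"s1": 3, "s2": 4, "s3": 5},    # users kept after K-means/PSO clustering
               {"s1": 5, "s3": 4}]
    print(slope_one_predict(active, cluster, "s3"))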

    Haithem Mezni, Sabeur Aridhi, Allel Hadjali

    The uncertain cloud: State of the art and research challenges

    International Journal of Approximate Reasoning, 2018

    Abstract

    During the last decade, cloud computing became a natural choice to host and provide various computing resources as on-demand services. The correct characterization and management of cloud environment objects (clouds, data centers, providers, services, data, users, etc.) is the first step towards effective provisioning and integration of cloud services. However, the cloud computing environment is often subject to uncertainty. This can be attributed to the incompleteness and imprecision of the available cloud information, as well as to highly changing conditions. The purpose of this survey is to study, criticize, and classify existing works that deal with uncertainty in the cloud. We present a taxonomy of uncertainty in the cloud and study how this concept has been tackled by researchers in cloud environments. Finally, we identify the challenges and requirements for dealing with uncertain data in the cloud, as well as future directions.