-
2021
Ameni Hedhli, Haithem Mezni
A survey of service placement in cloud environments
Journal of Grid Computing, 19(3), 23, 2021
Abstract
Cloud computing is widely adopted by the current computing industry. Not only can users benefit from cloud scalability, but businesses are also increasingly attracted by its flexibility. In addition, the number of offered cloud services (e.g., SaaS, BPaaS, mobile services, etc.) is continuously growing. This raises the question of how to effectively arrange and place them in the cloud in order to offer high-performance services. Indeed, companies' and providers' benefits are strongly related to the optimal placement and management of cloud services, together with their related data. This produces various challenges, including the heterogeneity and dynamicity of hosting cloud zones, the cloud- and service-specific placement constraints, etc. Recent cloud service placement approaches have dealt with these issues through different techniques and by targeting various optimization criteria. Moreover, researchers have considered other specificities, such as the cloud environment type, the deployment model and the placement mode. This paper provides a comprehensive survey of service placement schemes in the cloud. We also identify the current challenges for different cloud service models and environments, and we outline future directions.
Anouer Bennajeh, Lamjed Ben Said
Driving control based on bilevel optimization and fuzzy logic
This paper addresses the car-following driving control problem using a bilevel optimization approach that considers both leader and follower behaviors. A fuzzy logic-based model is proposed to capture nonnormative driver behavior, validated with real traffic data, 2021
Abstract
Driving control in car-following (CF) driving behavior has two aspects. First, to what extent can an approximated distance be taken as a safe distance that guarantees the safety of the follower driver. Second, how to control the follower's vehicle velocity based on the stimulus of the leading vehicle. In this context, to solve the driving control problem in CF driving behavior, a bilevel optimization approach is presented in this paper, based on the behaviors of the follower and leader drivers. Although mathematical models have long contributed to the imitation of human behaviors, these behaviors now reach a level of complexity that requires a new player, namely artificial intelligence. Thus, in this paper, we use fuzzy logic theory to model a follower driver with nonnormative behavior. To validate our model, we used a data set from a program of the US Federal Highway Administration. According to the experimental results, the actual and simulated travel trajectories are homogeneous in terms of deviation. Moreover, the adopted driver behavior (normative or nonnormative) is reflected in the driver's reactions to the various components of the road.
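The fuzzy follower model itself is not reproduced here, but the minimal Python sketch below illustrates the general idea of fuzzy-rule-based car-following control: triangular memberships over the gap and the relative speed feed a small rule base that outputs a follower acceleration. All breakpoints, rules and values are assumptions for illustration, not the paper's model.

    # Illustrative Sugeno-style fuzzy rule base (not the paper's model): the gap to
    # the leader and the relative speed are fuzzified with triangular memberships
    # and mapped to a follower acceleration.

    def tri(x, a, b, c):
        """Triangular membership function with breakpoints a <= b <= c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def follower_acceleration(gap_m, rel_speed_mps):
        # Membership degrees for the gap (breakpoints in metres are assumptions).
        close, safe, far = tri(gap_m, 0, 5, 15), tri(gap_m, 5, 15, 30), tri(gap_m, 15, 40, 80)
        # Membership degrees for the relative speed (leader minus follower, m/s).
        closing, steady, opening = tri(rel_speed_mps, -6, -3, 0), tri(rel_speed_mps, -2, 0, 2), tri(rel_speed_mps, 0, 3, 6)
        # Rule consequents: crisp accelerations (m/s^2) chosen for illustration only.
        rules = [
            (min(close, closing), -3.0),   # too close and still closing: brake hard
            (min(close, steady),  -1.5),
            (min(safe, closing),  -1.0),
            (min(safe, steady),    0.0),
            (min(far, steady),     0.5),
            (min(far, opening),    1.0),   # large gap and opening: accelerate
        ]
        weight_sum = sum(w for w, _ in rules)
        return sum(w * a for w, a in rules) / weight_sum if weight_sum else 0.0

    print(follower_acceleration(gap_m=6.0, rel_speed_mps=-2.0))   # braking expected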
-
2020
On the use of big data frameworks for big service composition
Journal of Network and Computer Applications, 2020
Abstract
Over the last years, big data has emerged as a new paradigm for the processing and analysis of massive volumes of data. Big data processing has been combined with service and cloud computing, leading to a new class of services called "Big Services". In this new model, services can be seen as an abstract layer that hides the complexity of the processed big data. To meet users' complex and heterogeneous needs in the era of big data, service reuse is a natural and efficient means that helps orchestrate the operations of available services to provide customers with on-demand big services. However, different from traditional Web service composition, composing big services refers to the reuse of not only existing high-quality services but also high-quality data sources, while taking into account their security constraints (e.g., data provenance, threat level and data leakage). Moreover, composing heterogeneous and large-scale data-centric services faces several challenges apart from security risks, such as the high execution time of big services and the incompatibility between providers' policies across multiple domains and clouds. Aiming to solve the above issues, we propose a scalable approach for big service composition, which considers not only the quality of reused services (QoS) but also the quality of their consumed data sources (QoD). Since the correct representation of big service requirements is the first step towards an effective composition, we first propose a quality model for big services and quantify data breaches using L-Severity metrics. Then, to facilitate processing and mining big services' related information during composition, we exploit the strong mathematical foundation of fuzzy Relational Concept Analysis (fuzzy RCA) to build the big services' repository as a lattice family. We also use fuzzy RCA to cluster services and data sources based on various criteria, including their quality levels, their domains, and the relationships between them. Finally, we define algorithms that parse the lattice family to select and compose high-quality and secure big services in a parallel fashion. The proposed method, which is implemented on top of the Spark big data framework, is compared with two existing approaches, and experimental studies prove the effectiveness of our big service composition approach in terms of QoD-aware composition, scalability, and security breaches.
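As a rough illustration of ranking composition candidates by both QoS and QoD while penalizing data-breach severity, consider the toy Python sketch below; the weights, attributes and flat averaging are assumptions and do not reproduce the paper's quality model or its fuzzy RCA lattice.

    # Toy ranking of candidate services that combines service quality (QoS), data
    # source quality (QoD) and a data-breach severity penalty. Weights, attributes
    # and the flat averaging are assumptions, not the paper's quality model.

    def score(candidate, w_qos=0.5, w_qod=0.4, w_sev=0.1):
        qos = sum(candidate["qos"].values()) / len(candidate["qos"])   # assumed in [0, 1]
        qod = sum(candidate["qod"].values()) / len(candidate["qod"])   # assumed in [0, 1]
        return w_qos * qos + w_qod * qod - w_sev * candidate["l_severity"]

    candidates = [
        {"name": "weather-analytics",
         "qos": {"availability": 0.99, "latency": 0.80},
         "qod": {"freshness": 0.90, "completeness": 0.70}, "l_severity": 0.2},
        {"name": "geo-enrichment",
         "qos": {"availability": 0.95, "latency": 0.90},
         "qod": {"freshness": 0.60, "completeness": 0.80}, "l_severity": 0.6},
    ]
    best = max(candidates, key=score)
    print(best["name"], round(score(best), 3))   # weather-analytics 0.748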
Towards big services composition
Web and Grid Services, 2020
Abstract
Recently, cloud computing has been combined with big data processing, leading to a new model of services called big services. This model addresses customers' complex requirements by reusing and aggregating existing services from various domains and delivery models, and from multiple cloud availability zones. Existing web/cloud service composition approaches are not adequate for the big service context for many reasons, including the large volume of data, the cross-domain and cross-cloud interoperability issues, etc. Considering the aforementioned facts, we provide a solution to the big service composition issue by taking advantage of relational concept analysis (RCA) as a clustering method and composite particle swarm optimisation (CPSO) as an optimisation technique. RCA is used to model the big service environment, whereas CPSO helps continuously optimise the quality of the big service composition. The implementation and experimental studies of our approach have proven its feasibility and efficiency.
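The following toy Python sketch conveys the flavour of a particle-swarm search over a discrete composition space (one candidate service per abstract task, scored by average QoS); it is illustrative only and is not the CPSO variant used in the paper.

    # Toy particle-swarm search over a discrete composition space: each particle
    # picks one candidate service per abstract task and is scored by average QoS.
    # Purely illustrative (invented QoS values); this is not the paper's CPSO.
    import random

    qos = [            # qos[task][candidate] = utility of that candidate for the task
        [0.6, 0.9, 0.7],
        [0.8, 0.5, 0.95],
        [0.7, 0.85, 0.6],
    ]

    def fitness(position):
        return sum(qos[t][c] for t, c in enumerate(position)) / len(position)

    def optimise(n_particles=10, iters=50, seed=1):
        random.seed(seed)
        swarm = [[random.randrange(len(row)) for row in qos] for _ in range(n_particles)]
        best = list(max(swarm, key=fitness))
        for _ in range(iters):
            for particle in swarm:
                for t in range(len(qos)):
                    if random.random() < 0.3:        # drift toward the global best
                        particle[t] = best[t]
                    elif random.random() < 0.1:      # occasional random exploration
                        particle[t] = random.randrange(len(qos[t]))
                if fitness(particle) > fitness(best):
                    best = list(particle)
        return best, fitness(best)

    print(optimise())   # expected to find [1, 2, 1] on this toy instance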
Bringing semantics to multi-cloud service compositions
Software: Practice and Experience, 2020
Abstract
Over the last decade, cloud computing has emerged as a new paradigm for delivering various on-demand virtualized resources as services. Cloud services have inherited not only the major characteristics of web services but also their classical issues, in particular the interoperability issues and the heterogeneous nature of their hosting environments. The latter problem must be taken into account when composing various cloud services in order to answer users' complex requirements. Moreover, leading cloud providers have started to offer their services across multiple clouds. This adds a new factor of heterogeneity, as composition engines must take into consideration the heterogeneity not only at the service level (e.g., service descriptions) but also at the cloud level (e.g., pricing models, security policies). In this context, the semantics of multicloud actors must be incorporated into the multicloud service composition (MCSC) process. However, most existing approaches have treated semantic service composition in traditional single-cloud environments. The few works in multicloud settings have ignored the semantics of cloud zones and resources. Moreover, they often focus on the general aspect of MCSC (e.g., horizontal or vertical compositions). Even the few researchers who have addressed both vertical and horizontal service composition conducted their studies in the context of single-cloud environments, which were proven to be unrealistic and to offer limited quality of service (QoS) and security support. To ensure high interoperability when composing services from multiple heterogeneous clouds and to enable horizontal/vertical semantic service composition, we take advantage of a standardized and semantically enriched generic service description, covering all aspects (technical, operational, business, semantic, contextual) and supporting different cloud service models (SaaS, PaaS, IaaS, etc.). We also incorporate the Semantic Web Rule Language into the MCSC process, not only to enable rule-based reasoning about various composition constraints (e.g., QoS constraints, cloud zone constraints) but also to provide accurate semantic matching of cloud services' capabilities. Conducted experiments have proven the ability of our approach to combine high-quality services from the optimal number of clouds.
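To illustrate the spirit of rule-based pruning during composition, the toy Python sketch below keeps only candidates that satisfy QoS and cloud-zone constraints; the fields, thresholds and rules are invented for the example and are neither SWRL nor the paper's rule base.

    # Toy rule-based pruning of composition candidates: only services satisfying
    # the QoS and cloud-zone constraints are kept. Fields, thresholds and rules are
    # invented for the example; they are neither SWRL nor the paper's rule base.

    candidates = [
        {"name": "s1", "cloud": "cloudA", "zone": "eu-west", "availability": 0.999, "price": 0.12},
        {"name": "s2", "cloud": "cloudB", "zone": "us-east", "availability": 0.950, "price": 0.05},
        {"name": "s3", "cloud": "cloudB", "zone": "eu-west", "availability": 0.990, "price": 0.20},
    ]

    rules = [
        lambda s: s["availability"] >= 0.99,   # QoS constraint
        lambda s: s["zone"] == "eu-west",      # cloud-zone constraint
    ]

    eligible = [s["name"] for s in candidates if all(rule(s) for rule in rules)]
    print(eligible)   # ['s1', 's3']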
Rahma Ferjani, Lilia Rejeb, Lamjed Ben Said
Cooperative Reinforcement Multi-Agent Learning System for Sleep Stages Classification
2020 International Multi-Conference on Organization of Knowledge and Advanced Technologies (OCTA), 2020
Abstract
Sleep analysis is an important process in the identification of sleep disorders and is highly dependent on sleep scoring. Sleep scoring is a complex, time-consuming and exhausting task for experts. In this paper, we propose an automatic sleep scoring model based on unsupervised learning to avoid the pre-labeling task. Taking advantage of the distributed nature of Multi-Agent Systems (MAS), we propose a classification model based on various physiological signals coming from heterogeneous sources. The proposed model offers fully cooperative learning to automatically score sleep into several stages based on unlabeled data. The heterogeneous adaptive agents deal with a dynamic environment of various physiological signals. The efficiency of our approach was investigated using real data. Promising results were reached according to a comparative study carried out against commonly used classification models. The proposed generic model could be used in fields where data come from heterogeneous sources and classification rules are not predefined.
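A minimal Python sketch of the cooperative, label-free flavour of such a system is given below: each "agent" observes one signal channel, assigns an epoch to the nearest of a few centroids, and the stage is decided by majority vote. The data, centroids and voting rule are assumptions for illustration, not the paper's learning scheme.

    # Illustrative sketch: several "agents", each observing one signal channel,
    # independently assign an epoch to the nearest centroid, and the sleep stage is
    # decided by majority vote. Data, centroids and voting are assumptions.
    from collections import Counter
    import random

    random.seed(0)
    # 15 synthetic 3-channel epochs drawn around three "stages" (0, 1, 2).
    epochs = [[random.gauss(stage, 0.4) for _ in range(3)]
              for stage in (0, 1, 2) for _ in range(5)]
    centroids = [0.0, 1.0, 2.0]                        # one assumed centroid per stage

    def agent_vote(epoch, channel):
        value = epoch[channel]                         # each agent sees only its channel
        return min(range(len(centroids)), key=lambda k: abs(value - centroids[k]))

    def cooperative_stage(epoch):
        votes = [agent_vote(epoch, ch) for ch in range(len(epoch))]
        return Counter(votes).most_common(1)[0][0]     # majority vote across agents

    print([cooperative_stage(e) for e in epochs])      # roughly 0s, then 1s, then 2s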
Rihab Said, Slim Bechikh, , Lamjed Ben Said
Solving Combinatorial Multi-Objective Bi-Level Optimization Problems Using Multiple Populations and Migration Schemes
IEEE Access, vol. 8, pp. 141674-141695, 2020
Abstract
Many decision making situations are characterized by a hierarchical structure where a lower-level (follower) optimization problem appears as a constraint of the upper-level (leader) one. Such situations are usually modeled as a BLOP (Bi-Level Optimization Problem). Resolving a BLOP usually has a heavy computational cost because the evaluation of a single upper-level solution requires finding its corresponding (near) optimal lower-level one. When several objectives are optimized at each level, the BLOP becomes a multi-objective task and even more computationally costly, as the optimum corresponds to a whole non-dominated solution set, called the PF (Pareto Front). Despite the considerable number of recent works in multi-objective evolutionary bi-level optimization, the number of methods that can be applied to the combinatorial (discrete) case is much smaller. Motivated by this observation, we propose in this paper an Indicator-Based version of our recently proposed Co-Evolutionary Migration-Based Algorithm (CEMBA), named IB-CEMBA, to solve combinatorial multi-objective BLOPs. The indicator-based search choice is justified by two arguments. On the one hand, it allows selecting the solution having the maximal marginal contribution, in terms of the performance indicator, from the lower-level PF. On the other hand, it encourages both convergence and diversity at the upper level. The comparative experimental study reveals the outperformance of IB-CEMBA on a multi-objective bi-level production-distribution problem. From the effectiveness viewpoint, the upper-level hypervolume values and inverted generational distance values vary in the intervals [0.8500, 0.9710] and [0.0072, 0.2420], respectively. From the efficiency viewpoint, IB-CEMBA has a good reduction rate of the Number of Function Evaluations (NFEs), lying in the interval [30.13%, 54.09%]. To further show the versatility of our algorithm, we have developed a case study in machine learning, more specifically addressing the bi-level multi-objective feature construction problem.
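The indicator-based selection step can be illustrated with a small Python sketch that picks, from a 2-objective non-dominated set, the point with the largest hypervolume contribution (minimisation, with an assumed reference point); it is a generic illustration, not IB-CEMBA.

    # Indicator-based selection in miniature: from a 2-objective non-dominated set
    # (minimisation), pick the point with the largest hypervolume contribution with
    # respect to an assumed reference point. Generic illustration, not IB-CEMBA.

    def hypervolume(points, ref):
        """Hypervolume of a 2-D non-dominated set for minimisation."""
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in sorted(points):                  # ascending f1, descending f2
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    def max_contributor(front, ref=(1.2, 1.2)):
        total = hypervolume(front, ref)
        contribution = {p: total - hypervolume([q for q in front if q != p], ref)
                        for p in front}
        return max(contribution, key=contribution.get)

    front = [(0.1, 0.9), (0.3, 0.5), (0.6, 0.3), (0.9, 0.1)]
    print(max_contributor(front))                      # (0.3, 0.5) on this toy front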
Rahma Dhaouadi
Knowledge Deduction and Reuse Application to the Products' Design Process
International Journal of Software Engineering and Knowledge Engineering, 30(2): 217-237, 2020
Abstract
In this paper, we introduce a framework for knowledge reuse and deduction in mechanical products design and development. The proposed system effectively exploits the capitalized and inferred knowledge. To this end, we set up an ontology dealing with the design process of mechanical products such as the car. The ontology-based framework is supported by a software tool that brings automatic and personalized assistance to the corresponding actors using the deduction process. Indeed, the system provides the relevant knowledge to the suitable users in order to facilitate their professional tasks, considering their roles and collaboration. Experimental results have demonstrated the effectiveness of reusing knowledge during the product development lifecycle.
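As a purely illustrative Python sketch of role-driven knowledge retrieval with a simple deduction step (all identifiers are invented; the paper's ontology and tool are not reproduced):

    # Toy sketch of role-driven knowledge retrieval with one deduction step:
    # knowledge items carry the roles they are relevant to and may imply further
    # items. All identifiers are invented; the paper's ontology is not reproduced.

    knowledge = [
        {"id": "K1", "step": "requirements",    "roles": {"designer"},            "implies": ["K3"]},
        {"id": "K2", "step": "simulation",      "roles": {"analyst"},             "implies": []},
        {"id": "K3", "step": "detailed-design", "roles": {"designer", "analyst"}, "implies": []},
    ]
    by_id = {item["id"]: item for item in knowledge}

    def knowledge_for(role):
        selected = [item for item in knowledge if role in item["roles"]]
        inferred = [by_id[i] for item in selected for i in item["implies"]
                    if by_id[i] not in selected]
        return [item["id"] for item in selected + inferred]

    print(knowledge_for("designer"))   # ['K1', 'K3']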
Sofian Boutaib, Slim Bechikh, , Maha Elarbi, , Lamjed Ben Said
Code smell detection and identification in imbalanced environments
Expert Systems with Applications, 2020
Abstract
Code smells are sub-optimal design choices that could lower software maintainability. Previous literature did not consider an important characteristic of the smell detection problem, namely data imbalance. When considering a high number of code smell types, the number of smelly classes is likely to largely exceed the number of non-smelly ones, and vice versa. Moreover, most studies did not address the smell identification problem, which is likely to present an even higher imbalance, as the number of smelly classes is relatively much smaller than the number of non-smelly ones. An additional research gap in the literature is that the number of smell type identification methods is very small compared to the number of detection methods. The main challenges in smell detection and identification in an imbalanced environment are: (1) the structuring of the smell detector, which should be able to deal with complex splitting boundaries and small disjuncts; (2) the design of the detector quality evaluation function, which should take into account data imbalance; and (3) the efficient search for effective software metrics' thresholds that well characterize the different smells. We propose ADIODE, an effective search-based engine that is able to deal with all the above-described challenges, not only for the smell detection case but also for the identification one. Indeed, ADIODE is an EA (Evolutionary Algorithm) that evolves a population of detectors encoded as ODTs (Oblique Decision Trees) using the F-measure as a fitness function. This allows ADIODE to efficiently approximate globally-optimal detectors with effective oblique splitting hyper-planes and metrics' thresholds. We note that to build the BE, each software class is parsed using a particular tool to extract its metrics' values, based on which the considered class is labeled by means of a set of existing advisors; this can be seen as a two-step construction process. A comparative experimental study on six open-source software systems demonstrates the merits and the outperformance of our approach compared to four of the most representative and prominent baseline techniques available in the literature. The detection results show that the F-measure of ADIODE ranges between 91.23% and 95.24%, and its AUC lies between 0.9273 and 0.9573. Similarly, the identification results indicate that the F-measure of ADIODE varies between 86.26% and 94.5%, and its AUC is between 0.8653 and 0.9531.
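Two ingredients named in the abstract, an oblique split over metric values and an F-measure-based fitness that accounts for imbalance, can be illustrated with the toy Python sketch below; the weights, metrics and labels are invented and this is not ADIODE.

    # Toy illustration of an oblique split (weighted sum of metric values against a
    # threshold) used as a one-node detector, scored with the F-measure so that
    # class imbalance is taken into account. Weights, metrics and labels are invented.

    samples = [   # (metric vector, label: 1 = smelly, 0 = non-smelly)
        ([0.9, 0.8], 1), ([0.85, 0.7], 1), ([0.2, 0.3], 0),
        ([0.1, 0.4], 0), ([0.3, 0.2], 0), ([0.25, 0.35], 0),
    ]

    def predict(weights, threshold, x):
        return 1 if sum(w * v for w, v in zip(weights, x)) >= threshold else 0

    def f_measure(weights, threshold):
        tp = sum(1 for x, y in samples if y == 1 and predict(weights, threshold, x) == 1)
        fp = sum(1 for x, y in samples if y == 0 and predict(weights, threshold, x) == 1)
        fn = sum(1 for x, y in samples if y == 1 and predict(weights, threshold, x) == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    print(f_measure(weights=[0.5, 0.5], threshold=0.6))   # 1.0 on this toy data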
Mouna Karaja, Meriem Ennigrou, Lamjed Ben Said
Budget-constrained dynamic Bag-of-Tasks Scheduling algorithm for heterogeneous multi-cloud environment
OCTA International Multi-Conference, Information Systems and Economic Intelligence (SIIE), 2020
Abstract
Cloud computing has reached huge popularity for delivering on-demand services on a pay-per-use basis over the internet. However, as the number of cloud users grows, multi-cloud environments have been introduced, where clouds are interconnected in order to satisfy customers' requirements. Task scheduling in such environments is very challenging, mainly due to the heterogeneity of resources. In this paper, a budget-constrained dynamic Bag-of-Tasks scheduling algorithm for heterogeneous multi-cloud environments is proposed. By performing experiments on synthetic data sets that we propose, we demonstrate the effectiveness of the algorithm in terms of makespan.
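A greedy Python sketch of budget-constrained Bag-of-Tasks scheduling on heterogeneous machines is shown below for illustration; the heuristic, machine characteristics and budget are assumptions, not the algorithm proposed in the paper.

    # Greedy sketch of budget-constrained Bag-of-Tasks scheduling on heterogeneous
    # machines: each task goes to the machine that finishes it earliest while the
    # accumulated cost stays within the budget. Heuristic and numbers are assumptions.

    vms = [   # heterogeneous machines: speed in MI/s, price per second of use
        {"name": "small", "speed": 10.0, "price": 0.01, "ready": 0.0},
        {"name": "large", "speed": 40.0, "price": 0.05, "ready": 0.0},
    ]
    tasks = [120.0, 300.0, 80.0, 500.0]   # independent task lengths in MI
    budget, spent = 1.5, 0.0

    for length in sorted(tasks, reverse=True):            # place longest tasks first
        cost = lambda vm: (length / vm["speed"]) * vm["price"]
        finish = lambda vm: vm["ready"] + length / vm["speed"]
        affordable = [vm for vm in vms if spent + cost(vm) <= budget]
        if affordable:
            vm = min(affordable, key=finish)              # earliest finish within budget
        else:
            vm = min(vms, key=cost)                       # budget exhausted: cheapest option
        spent += cost(vm)
        vm["ready"] += length / vm["speed"]

    print("makespan:", max(vm["ready"] for vm in vms), "cost:", round(spent, 3))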


