Thesis Title:
Active Data Mining using Agent
Abstract:
Thesis Title:
Presenting a Search Algorithm Based on Population-based Methods in Combinatorial Optimization
Abstract:
Ant Colony Optimization (ACO) is a metaheuristic inspired by the behavior of real ant colonies. In this thesis, we propose a hybrid ACO algorithm that solves the vehicle routing problem (VRP) heuristically in combination with an exact algorithm, improving both the performance of the algorithm and the quality of its solutions. In the basic VRP, geographically scattered customers of known demand are supplied from a single depot by a fleet of identically capacitated vehicles, which are subject to a weight limit and, in some cases, to a limit on the distance traveled. Only one vehicle is allowed to supply each customer. The objective is to design least-cost routes for the vehicles to service the customers.
The intuition behind the proposed algorithm is that nodes which are near each other will probably belong to the same branch of the minimum spanning tree (MST) of the problem graph, and thus will probably belong to the same route in the VRP. In each iteration, the proposed algorithm first applies a modified implementation of Prim's algorithm to the problem graph to obtain a feasible minimum spanning tree solution. Given the resulting clustering of client nodes, ACO then finds a route within these clusters using a modified version of the ants' transition rule. At the end of each iteration, ACO tries to improve the quality of the solutions using a local search algorithm, and updates the associated weights of the graph arcs.
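As an illustration of the clustering step, the sketch below implements plain Prim's algorithm on a complete Euclidean graph; the capacity-aware modification and the ACO routing phase described above are not reproduced, and the coordinates are hypothetical:

```python
import math

def prim_mst(coords):
    """Plain Prim's algorithm on a complete Euclidean graph.

    Returns MST edges as (parent, child) index pairs. This sketches only
    the clustering step; the thesis's modified, capacity-aware variant
    and the ACO routing phase are not reproduced here.
    """
    dist = lambda i, j: math.dist(coords[i], coords[j])
    # best[v] = (cost, parent): cheapest known link from v into the tree,
    # which initially contains only the depot (node 0)
    best = {v: (dist(0, v), 0) for v in range(1, len(coords))}
    edges = []
    while best:
        v = min(best, key=lambda u: best[u][0])
        _, parent = best.pop(v)
        edges.append((parent, v))
        for u in best:  # relax links through the newly added node
            d = dist(v, u)
            if d < best[u][0]:
                best[u] = (d, v)
    return edges

# depot at index 0, then clients; two natural clusters near (1,0) and (5,5)
depot_and_clients = [(0, 0), (1, 0), (1, 1), (5, 5), (6, 5)]
mst = prim_mst(depot_and_clients)
```

Nodes that end up on the same MST branch (here, the two clusters) would then be routed together by the ACO phase.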
Thesis Title:
Cooperative Mobile Agents Recovery in Execution Time by Improvement of Message Complexity
Abstract:
Thesis Title:
Improving Website User Model Automatically Using Semantics with Domain Specific Concepts
Abstract:
Information overload is a major problem in the current World Wide Web. To tackle this problem, web personalization systems have been proposed that adapt the contents and services of a website to individual users according to their interests and navigational behaviors. A major component of any web personalization system is its user model. The content of the pages in a website can be utilized to create a more precise user model, but keyword-based approaches lack deep insight into the website. Recently, a number of studies have attempted to incorporate the semantics of a website into the representation of its users. All of these efforts use either a specific manually constructed taxonomy or ontology, or a general-purpose one like WordNet, to map page views into semantic elements. However, building a hierarchy of concepts manually is time-consuming and expensive, while general-purpose resources suffer from low coverage of domain-specific terms. In this thesis we address both of these shortcomings. Our main contribution is a mechanism to automatically improve the representation of the website's users using a comprehensive lexical semantic resource: we utilize Wikipedia, the largest encyclopedia to date, as a rich lexical resource to enhance the automatic construction of a vector-model representation of user interests. The proposed architecture consists of several components, namely basic log preprocessing, website domain concept extraction, website keyword extraction, keyword vector building, and keyword-to-concept mapping. Another important contribution is using the structure of the website to automatically narrow down domain-specific concepts. Finally, the last contribution is a new keyword-to-concept mapping method. Our evaluations show that the proposed method, along with its comprehensive lexical resource, represents users more effectively than keyword-based and WordNet-based approaches.
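To make the vector-model idea concrete, the sketch below folds page keywords into concept frequencies through a toy keyword-to-concept table (a stand-in for the Wikipedia-derived mapping of the thesis) and compares two user profiles by cosine similarity; the table and the profiles are hypothetical:

```python
import math
from collections import Counter

# Hypothetical keyword -> concept lookup; in the thesis this mapping is
# derived automatically from Wikipedia, not hand-written like this.
CONCEPTS = {"laptop": "Computer", "notebook": "Computer",
            "goalkeeper": "Football"}

def concept_vector(page_keywords):
    """Fold a user's page-view keywords into a concept frequency vector."""
    return Counter(CONCEPTS.get(k, k) for k in page_keywords)

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

u1 = concept_vector(["laptop", "notebook"])   # both map to "Computer"
u2 = concept_vector(["laptop", "goalkeeper"])
```

A purely keyword-based model would treat "laptop" and "notebook" as unrelated terms; mapping both to the concept "Computer" is what lets the two profiles above overlap.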
Thesis Title:
Hybrid Method to Improve Text Summarization
Abstract:
Thesis Title:
A New Approach for Tracing Quality Attributes in Service Oriented Architecture
Abstract:
The dynamic nature of service-oriented architecture (SOA) distinguishes it from other software architectures. A SOA system consists of a number of independent distributed services, which can be disconnected from the core software, or replaced with a better choice, dynamically at runtime. In other words, the architecture of such software changes continuously at runtime. As the links between the main software and other services are created and destroyed at runtime, many quality attributes may change noticeably.
Given this dynamic behavior, quality issues become more important and more complicated in service-oriented software, and they need a different approach based on the special characteristics of this type of architecture. Of the many studies carried out on quality in SOA, most revolve around Quality of Service (QoS). Beyond such issues, however, the overall quality of a software system composed of multiple independent services is also of great importance.
The overall quality of a service-oriented architecture is a consequence of the qualities of the services and components that constitute it, each built by a different vendor and with a different level of quality. Moreover, the presence of these services and components in the system is not permanent: they may be disconnected from the architecture when required, or replaced by a better choice at runtime. Therefore, the quality of a service-oriented architecture is subject to considerable turbulence at runtime.
In this thesis, a new method is introduced for tracing the changes that may occur in the quality attributes of service-oriented software at runtime, by means of a formal model called graph transformation systems (GTS). Using this method, the overall quality of the software can be recalculated as the architecture changes at runtime by connecting to or disconnecting from a service. This method can be used in SOA quality management and change management. It can also be useful in the service selection process, as a way to predict the quality of service-oriented software.
Thesis Title:
Analysis and design of the evaluation process of data mining system architectures
Abstract:
Nowadays, data mining systems are widely used to discover knowledge from large distributed repositories. These systems require specific functionality, as well as quality attributes different from those of traditional software systems. Among these requirements are extensibility, integrity, support for high-dimensional data, flexibility, privacy preservation, distributability, customizability, transparency, support for large amounts of data, fault tolerance, and portability. Fulfilling these quality attributes relies vitally on developing a well-suited architecture and a method to evaluate it. Consequently, the more successful we are in pursuing these requirements, the less we pay to repair and maintain the system.
In this thesis, we propose a method to evaluate the architecture of data mining systems. We extracted the criteria needed to evaluate data mining system architectures by improving and adapting the best-established software architecture evaluation method, the Architecture Tradeoff Analysis Method (ATAM). To implement our method, called DMATAM, we made broad use of modeling and measurement tools as well as data-mining-specific criteria. Furthermore, we analyzed our method using a framework for choosing architecture evaluation methods; as a result, our proposed method was found to be an appropriate one for evaluating data mining system architectures.
Thesis Title:
Automatic Evaluation of Machine Translation, Enhancement of N-gram Based Methods
Abstract:
Since machine translation has become a widespread technology, its evaluation has become critical. Human evaluation of machine translation is expensive and time-consuming; automatic evaluation metrics can be a good substitute, as they are fast and cheap.
A considerable number of automatic machine translation evaluation metrics have been developed since the 1990s, although not all of them are practical. Several successful and practical methods have appeared in the 21st century.
Most machine translation evaluation metrics are based on string similarity; some, however, use machine learning approaches.
BLEU is one of the most popular metrics for machine translation evaluation. This thesis extends BLEU by assigning proper weights to n-grams. A word's role in the sentence, which can be obtained from its parse tree, is used to calculate the weight; the arithmetic mean and harmonic mean of the parse-tree components are adopted to estimate the weight of an n-gram. Like BLEU, this method is language-independent and simple. This thesis addresses several weaknesses of BLEU and offers adequate solutions for some of them, proposing a method that correlates better with human judgment.
Experimental results indicate that this method achieves a higher correlation with human judgments than the original BLEU.
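The core of such an extension can be illustrated by a modified n-gram precision in which each n-gram contributes according to a weight. In the sketch below, the `weight` callable is a placeholder for the parse-tree-derived weights of the thesis; with a flat weight of 1.0 it reduces to standard BLEU-style clipped precision:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def weighted_precision(candidate, reference, n, weight):
    """Clipped n-gram precision where each n-gram carries a weight.

    `weight` maps an n-gram to its importance; the thesis derives these
    weights from parse trees, but any callable works here.
    """
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    # clip each candidate n-gram's count by its count in the reference
    matched = sum(min(c, ref[g]) * weight(g) for g, c in cand.items())
    total = sum(c * weight(g) for g, c in cand.items())
    return matched / total if total else 0.0

p1 = weighted_precision("the cat sat".split(),
                        "the cat sat down".split(),
                        1, lambda g: 1.0)
```

Replacing the flat `lambda g: 1.0` with a function that scores n-grams by their syntactic role is what shifts the metric's attention toward structurally important words.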
Thesis Title:
Using Aspect-Oriented Approach in Modeling and Evaluation of Non-functional Requirements in Web-based Systems Design
Abstract:
Today, web-based systems provide a wide variety of functionality and content for many kinds of end users. Considering our increasing dependence on web applications, with their intricate features and complex functional requirements, the quality of such systems has become a critical factor in their success. Hence, there is now legitimate and growing concern about the manner in which web-based systems are developed, and about their long-term quality and integrity. To be successful, a web application must be systematically developed in terms of both its functional and non-functional requirements. Web engineering, an emerging discipline, is the establishment and use of sound scientific, engineering, and management principles, and of disciplined and systematic approaches, for the successful development, deployment, and maintenance of high-quality web-based systems and applications. In this research, by adopting and examining an industrial case study, we found that conventional web engineering approaches are driven merely by functional requirements, and that decisions about achieving the quality of these functional concerns are not explicitly and systematically made until the implementation and maintenance phases. We found that the realization of the infrastructure mechanisms for fulfilling non-functional requirements becomes tangled and scattered across functional modules; therefore, as the system grows, its maintenance effort increases dramatically. We extracted the crosscutting pattern of these non-functional requirements and used the concept of an aspect, first introduced in aspect-oriented programming, to explicitly model and modularize them. We then improved the development process of our industrial case study by aspectual injection of non-functional requirements into its analysis, design, and implementation stages. Finally, in order to describe the proposed software architecture, we surveyed well-known Aspect-Oriented Architecture Description Languages (AOADLs).
Considering the results of this study, we extended UML 2.0 in order to describe non-functional concerns as architectural aspects. We then tested our approach on security and performance as the candidate quality attributes in the case study. Since the main contribution of the aspect-oriented approach is improved modularity, we expected the proposed architecture to have a lower performance cost. In this regard, we evaluated our approach using the Aspectual Software Architecture Analysis Method (ASAAM) and reported the results in the dissertation.
Thesis Title:
Agent based human computer interaction for blind persons
Abstract:
User satisfaction is an important issue in user interface design. Most available user interfaces are not suitable for disabled people; blind users cannot access information through them as easily as sighted users. For example, sighted users can, at a glance, distinguish between several pieces of information on a user interface according to their position, color, and style, but a blind person using a screen reader cannot perceive these visual features, and advertising links and additional information confuse such users. Therefore, designing intelligent user interfaces that are autonomous, goal-directed, dynamic, and based on user interests seems essential. Such intelligent UIs can increase task speed and decrease wasted time. In this thesis, several interviews with blind users were conducted, and their needs, tools, problems, and requirements during web access were identified. Finally, according to these requirements, an intelligent personalized search system was designed and implemented. Without any effort from the user, this system implicitly learns the user's interests and represents them in a user profile.
The main outcomes of the project are: 1) an analysis of an agent-based approach to search personalization; 2) a framework for comparing existing systems in this field; and 3) a survey of systems designed for blind persons.
Thesis Title:
Dynamic Ontology for Web Personalization
Abstract:
Thesis Title:
Mapping Business Process Modeling to Formal Model
Abstract:
Service Oriented Architecture (SOA) is a new paradigm for distributed computing which utilizes services to support the rapid, agile, and low-cost development of distributed applications. Web services have been accepted as the best way to implement SOA. Several languages exist for defining new and more complex services or business processes, implemented for example by means of web service composition. One of the most visible standards among existing approaches to building service compositions in the form of business processes is the Business Process Execution Language (BPEL).
BPEL is used to describe the execution logic of web service applications by defining their control flow and providing a way for partner services to share a common context. The ability to analyze functional properties and to predict quality (such as reliability) are two open issues for composite web services, and most approaches to specifying web service compositions as business processes suffer from them, which matters to designers.
This work proposes an approach to predicting the reliability of web service compositions built on BPEL. The proposed approach first transforms the business process specification into an appropriate modeling language, namely Petri nets, which have been shown to provide an appropriate foundation for static verification. We then predict the reliability of the WS-BPEL process using the Petri net model. The proposed method is applied to a loan approval service as a case study.
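For intuition, structural reliability prediction of this kind composes per-service reliabilities along the control-flow constructs that the Petri net encodes. The sketch below shows the two most common constructs with standard closed-form formulas; the loan-approval probabilities are hypothetical numbers, and the thesis's actual Petri-net-based model is richer than this:

```python
def sequence_reliability(reliabilities):
    """Services composed in sequence: all must succeed."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def choice_reliability(branches):
    """Exclusive choice: probability-weighted branch reliabilities.

    `branches` is a list of (branch_probability, branch_reliability)
    pairs, mirroring a conflict place in the Petri net model.
    """
    return sum(p * r for p, r in branches)

# Hypothetical loan-approval flow: receive -> assess, then either an
# automatic approval (80% of cases) or a manual risk check (20%).
r = sequence_reliability([0.999, 0.99]) * choice_reliability(
    [(0.8, 0.995), (0.2, 0.97)])
```

Nested combinations of these two operators (plus loop constructs) cover the control flow that BPEL's sequence, flow, and switch activities map to.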
Thesis Title:
Evaluation of Machine Translation Systems Architecture to Improve Hybrid Architecture
Abstract:
Little attention has been paid to intelligent systems, and especially to machine translation applications, in terms of a production process based on software engineering. Machine translation is a natural language processing application in which a text in a source language is taken as input and an equivalent text in the target language is produced. Machine translation is an open task, meaning that many valid translations can be produced from different combinations of words. Because of costs and resource expenses, it is better to evaluate the structural quality of machine translation systems during the analysis and design phases. Current machine translation evaluation techniques focus on the quality of the produced sentences rather than on the structure of the system, and because of this negligence, quality attributes and issues connected to the non-functional requirements of such systems are ignored. Considering these facts, we propose a method for evaluating the architecture of machine translation systems.
In this thesis we present a new method for evaluating the architecture of machine translation systems. In this method, the non-functional requirements of machine translation systems are assessed by representing quality attributes qualitatively. Moreover, using the proposed method, we evaluate the architectures of three hybrid machine translation systems. Finally, we analyze our method using a framework for choosing architecture evaluation methods, and we show that it is an appropriate approach to evaluating the architecture of hybrid machine translation systems.
Thesis Title:
Using Data Mining Techniques in Web Log Analysis for Producing Personalized Web Pages
Abstract:
Thesis Title:
Duplicate Record Detection in Operational Data Using Semantic Analysis
Abstract:
Duplicate record detection is a main activity in information systems. Detecting approximate duplicate records is a key problem in data integration and data cleaning. The process of duplicate record detection aims at determining whether two records represent the same real-world object.
The similarity function is the major element in duplicate record detection: it assigns a score to a pair of data values. Most approaches concentrate on string similarity measures for comparing records; however, these fail to identify records which share semantic information. In this study, we therefore propose a new similarity function which takes both semantic and string similarity into account.
Finding the proper similarity function for a given data set is another key problem in duplicate record detection. In this study, a new method is proposed to find the most suitable similarity functions for data sets.
All proposed methods are tested on real-world data sets and evaluated using standard metrics. Experimental results indicate that the new similarity function outperforms popular similarity functions on these metrics, and that the proposed method for finding a proper similarity function outperforms all other combinations of similarity functions.
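As an illustration, a hybrid similarity of the kind proposed here can blend a string measure with a lookup in a semantic resource. The sketch below uses Python's `difflib` for the string part and a toy synonym table in place of a real semantic resource; the blend weight `alpha` and the synonym sets are illustrative assumptions, not the thesis's actual function:

```python
from difflib import SequenceMatcher

# Toy synonym sets standing in for a real semantic resource.
SYNONYMS = [{"street", "st", "road"}, {"corporation", "corp", "inc"}]

def semantic_match(a, b):
    """True when both values fall in the same synonym set."""
    return any(a in s and b in s for s in SYNONYMS)

def hybrid_similarity(a, b, alpha=0.5):
    """Blend string and semantic similarity.

    alpha weights the string score; a full semantic hit contributes
    the remainder. Purely string-based measures would score "Street"
    vs "Road" near zero despite their shared meaning.
    """
    a, b = a.lower(), b.lower()
    string_score = SequenceMatcher(None, a, b).ratio()
    semantic_score = 1.0 if a == b or semantic_match(a, b) else 0.0
    return alpha * string_score + (1 - alpha) * semantic_score

s = hybrid_similarity("Street", "Road")
```

Here the string component alone scores "street" vs "road" at only 0.2, but the semantic component lifts the hybrid score to 0.6, letting the pair survive a typical match threshold.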
Thesis Title:
New Requirement Process Model for Critical Systems Focusing on Validation and Verification
Abstract:
The development of critical systems is very important because, in these systems, any incorrect behavior may lead to catastrophic loss in terms of cost, damage to the environment, or even human life. To avoid this, requirements should be identified and specified accurately, completely, and precisely. For this reason, Verification and Validation (V&V) in Requirements Engineering (RE) must be carried out in order to produce an error-free system.
Although some techniques have been presented in this area, drawbacks such as reliance on a single specific approach, limitations on system size, high time consumption, and complexity make them inappropriate in many situations. This thesis presents a requirements V&V technique intended to smooth the weaknesses of current ones.
Before proposing the new technique, different kinds of software systems were classified from different viewpoints in order to identify the position of critical systems. A survey of current V&V techniques was conducted, and they were subsequently classified into two levels; those useful in RE were then extracted. To evaluate V&V techniques, a framework was constructed in which a set of measurable criteria is suggested.
In the suggested technique, requirements are divided into two categories, critical and non-critical, in order to decrease both time consumption and complexity. Because of the importance of critical requirements, the technique concentrates on them. The suggested technique is a combination of informal, semi-formal, and formal models, providing efficient communication between customers and users as well as precise and accurate specification of requirements.
After presenting the new technique, the phases of the RE process were identified and the position of the new technique within them was specified. To examine the new technique, a traffic control system was selected as a case study, and the technique was applied to it successfully. To evaluate the technique, a descriptive comparison between the suggested technique and its rivals was first conducted.
Because of the ambiguities inherent in a descriptive comparison, a qualitative comparison between the suggested technique and two others, theorem proving and the goal-oriented approach, was then carried out by applying the presented framework.
The results show that the suggested technique yields precise, accurate, and valid requirements, and detects errors, defects, and inconsistencies. Moreover, its time consumption and complexity are lower than those of the other techniques, and it does not share their limitations. Although the required level of technical skill is high, this deficiency could be compensated for by an automation tool.
Thesis Title:
New computational ontology with consideration of concept domain for semantic analysis of transaction logs
Abstract:
Nowadays, with the ever-growing use of intelligent agents and the need for knowledge representation and reuse, ontologies are widely applied to facilitate the understanding of knowledge. Since 1993, there have been many definitions of ontology with different approaches, all centered on creating a formal explicit specification of a shared conceptualization of a specific domain. In our Cognibase model, a new representation of ontology is presented that provides a consistent and unique framework within which intelligent agents can communicate. Elimination of redundant components, a better understanding of knowledge, and efficient inference capabilities are among the features of this model. The semantic analysis system of this model receives transaction logs in different formats as input and produces an ontology model as output. To maintain system integrity, and because of the vast diversity of transaction log formats, the logs are preprocessed and then integrated into a "metadata" representation before entering the system.
A process model is presented to support the software approach, which can produce the ontology model automatically. Modeling approaches are also used to illustrate the architecture of this automatic system.
The proposed ontology model is evaluated by a question answering approach based on the ontology's concepts and terms, and it outperforms similar models in producing a semantic analysis system, leading to an efficient Cognibase model for presenting the outputs.
Thesis Title:
A method for validation and verification of ontology based on quality engineering
Abstract:
Nowadays, with the daily growth of knowledge-based systems, the use of ontologies for sharing knowledge in such systems is increasing too. One of the issues in any software application is its validation and verification. Despite much interest in ontology evaluation in recent years, ontology validation and verification has received little attention. One of the challenges in ontology validation and verification is the lack of a clear distinction between the concepts of evaluation and of validation and verification; another is the current focus on post-development activities only.
This thesis presents a framework for validating and verifying ontologies with respect to correctness, completeness, accuracy, and consistency measures, considering the ontology's life cycle, existing evaluation and validation and verification methods, and the quality criteria of the intended ontology development project. The proposed framework consists of eight steps of two kinds. Steps of the first kind focus on validating and verifying the ontology as part of a software system; they assess the system's requirements and the produced ontology's suitability against those requirements. Steps of the second kind focus on the ontology as a human-intelligible and machine-interpretable knowledge representation, emphasizing assessment of the ontology independently of the system in which it will be used.
The feasibility of the proposed framework is investigated by applying it to a "Tourism Guide System" as a test case. Finally, a guideline for applying the proposed methodology is provided. A comparison of the proposed methodology with four other comprehensive methods presented for ontology evaluation or validation and verification shows its comprehensiveness in covering ontology validation and verification goals.
Thesis Title:
Service oriented architecture for cloud environment
Abstract:
With the growing use of cloud computing as an infrastructure for providing web-based services, a clear understanding of the cloud becomes necessary. It is therefore essential to consider the cloud as the implementation environment at design time. Some cloud characteristics, in addition to changing the services themselves, change the requirements for service delivery. To meet these requirements, we must first understand them, and then design a new architecture according to the extracted requirements and the potential of the cloud.
For this purpose, this thesis first focuses on understanding cloud computing and the features that should be considered by a designer of cloud-based services; a standard language based on UML is then described to provide a development model focused on the cloud as an environment. Next, the characteristics of the cloud that give developers new opportunities are identified, and, according to the current requirements and the opportunities created by the cloud, we propose a new architecture to meet the existing requirements and enhance system quality. In this architecture, we use service-oriented architecture as the base architecture for the service provider and utilize the service bus as the interface component between the user and the service. Considering the new features of the system, some changes have also been applied in order to reduce the time required for service delivery. In addition to increasing the quality of service, the proposed architecture provides new opportunities, such as generating services based on needs and delivering a service with just one request from the end user.
Thesis Title:
New method for verification and validation of data mining systems
Abstract:
Data mining systems discover patterns and rules and extract useful knowledge from the data stored in databases. However, many of the obtained patterns are spurious, obvious, or redundant. Moreover, despite their correctness, these patterns may not be useful for a specific business and may not meet its requirements. Hence, it is of great significance to consider verification and validation activities throughout a data mining system's life cycle.
Verification and validation activities examine the system from various dimensions at each step of the life cycle, in order to achieve early detection of errors and defects. The rigor and intensity with which each of these activities is performed depends on the system's specific properties, such as sensitivity, size, and complexity. Focusing on this issue, this thesis proposes a framework for verification and validation that is customizable with respect to the system's properties and its development conditions.
A framework is one of the main techniques widely used in software engineering to develop software products. The application of a framework should be based on an engineering approach in order to meet quality, cost, and schedule goals in a software project; in this regard, a framework should be applied in a structured, systematic, and measurable manner. In this thesis, a new engineering perspective on software frameworks is proposed, and important issues in this approach, such as specification and representation, measurement, soundness, and completeness, are discussed in relation to it. For this purpose, the existing and most-referenced frameworks in software engineering, which reveal the most common elements and properties of a framework, are investigated. By analyzing these elements and properties, a meta-model based on the UML class diagram is provided, which indicates the general concepts and relationships of software frameworks in the proposed perspective. Given the importance of specification and representation in acquiring an engineering perspective on software frameworks, this issue is analyzed after presenting the intended perspective.
Based on the presented meta-model, a verification and validation framework is proposed, and, according to the results of the performed analyses, its specification and representation are taken into account. This framework is presented in a way that makes it applicable to both data mining and software systems. Eventually, the presented framework is applied to verifying and validating the use of Commercial Off-The-Shelf (COTS) components in component-based systems as a case study.
Thesis Title:
Design and Implementation of Data Warehouse in Cloud Environment
Abstract:
Nowadays, many organizations in the business field work nonstop and full time seven days a week. This matter has caused change in the decision support paradigm: It’s necessary to make decisions according to the newest business data. So, the modern data warehouse has to be accessible constantly to reach the decision support goal (to present information to decision makers and answer the queries constantly), and also upload data frequently (refresh data warehouse quickly to cover the newest data produced in the business field).
There are a lot of infrastructures for deploying the data warehouse. Nowadays, one of the most used and addressed infrastructure is “Cloud Computing”. The reason of importance of this computational model for deploying different applications is the capabilities that presents. Accordingly, the goal of this thesis is presenting a new architecture to deploy a data warehouse with real-time capability on the cloud computing infrastructure. For this purpose, we present our work which is done in three parts. A short description of these three parts is expressed as follows.
In the first part, we begin by reviewing concepts such as “data warehouse”, “cloud computing” and “real-time data warehouse”. Then, based on the available definitions and the insights they provide, we present the relevant characteristics of a data warehouse and derive the requirements of an infrastructure for deploying one. By weighing the properties of cloud computing against these requirements, we show that this environment can be an appropriate infrastructure for deploying a real-time data warehouse. In the assessment part of this thesis, we also compare cloud computing and the traditional data center as two deployment infrastructures for data warehouses, describe the capabilities each provides, and show that, given their characteristics, cloud computing can be the more appropriate infrastructure for deploying a data warehouse.
In the second part, based on our understanding of cloud computing, we extract the characteristics that a designer should consider at design time and present them as a requirements list. By extending a meta-model based on a standard modeling language (UML), we enable the modeling of applications to be developed on a cloud computing infrastructure. To demonstrate the soundness of the proposed meta-model, we apply it to a case study with a defined specification and present the deployment diagram of the case-study system on cloud computing according to the proposed meta-model.
Finally, in the third part, based on the specifications and requirements of a data warehouse, together with the characteristics and capabilities that cloud computing offers as a deployment infrastructure, we present a new architecture for deploying a data warehouse in a cloud computing environment. The distinguishing feature of the proposed architecture is its ability to adjust and manage the real-time factor of data warehouse administration according to the requirements and criteria of the business environment. In other words, in addition to guaranteeing qualitative requirements such as constant availability and constant responsiveness to queries, this architecture provides a middle ground between data warehouse systems that are refreshed in batches at fixed intervals and real-time data warehouses that refresh themselves continuously.
Based on this architecture, a cost estimation method built on qualitative criteria, such as the data-arrival rate and the required degree of real-timeness, is made available to users, enabling them to balance costs against their real-time needs.
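The abstract does not publish the cost formula itself. A minimal sketch of such a criteria-driven estimate, assuming a simple weighted-sum model in which all coefficients, parameter names, and the 0-to-1 real-time scale are illustrative assumptions rather than the thesis's actual method, might look like:

```python
def estimate_cost(data_rate_gb_per_hour, realtime_level, storage_gb,
                  rate_price=0.05, realtime_price=2.0, storage_price=0.02):
    """Hypothetical weighted-sum cost model for a cloud data warehouse.

    realtime_level: 0.0 (pure batch refresh) .. 1.0 (fully real-time).
    All prices are illustrative assumptions, not values from the thesis.
    """
    # Cost driven by the rate of data entering the warehouse.
    ingestion = data_rate_gb_per_hour * rate_price
    # A stricter real-time requirement raises refresh frequency and
    # therefore cost, modeled here as a simple multiplier on the rate.
    refresh = realtime_level * realtime_price * data_rate_gb_per_hour
    # Cost of the stored volume itself.
    storage = storage_gb * storage_price
    return ingestion + refresh + storage

# A user can compare a batch-oriented configuration against a
# near-real-time one for the same workload:
batch_cost = estimate_cost(10, 0.0, 100)
realtime_cost = estimate_cost(10, 0.9, 100)
```

Raising `realtime_level` only ever increases the estimate, which mirrors the trade-off the architecture exposes: users pay more as they push the warehouse toward continuous refresh.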
Thesis Title:
Engineering IT Metrics in order to Justify New Technologies
Abstract:
Thesis Title:
A New Approach for Managing Big Data Security in Cloud Computing Systems
Abstract:
In this study, we address Big Data privacy preservation in the cloud environment. First, different definitions of Big Data are surveyed, and based on them we present a new definition in which the velocity of changes is added to the velocity of Big Data. The tools relevant to Big Data challenges are then identified. Since these tools are used in the cloud environment, the cloud is considered as a platform for storing Big Data.
Data stored in the cloud environment is available to everyone. Each user accesses the data through the specific part of the cloud most related to his or her task. Since each part of the cloud is exposed to specific types of attacks, the data-access paths must be secured. We therefore analyze the security challenges of the cloud environment and present a new classification of attacks that makes explicit all possible attacks on the different parts of the cloud.
Analysis of these attacks showed that individual privacy is the most affected. Therefore, continuing the study of privacy preservation, we selected the k-anonymity approach. Since there are various k-anonymity algorithms, we used the optimal algorithm presented by Kohlmayer et al. and developed our proposed approach based on it.
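The k-anonymity property underlying this approach can be stated concretely: every combination of quasi-identifier values must occur in at least k records, so no individual can be singled out within a group smaller than k. The following sketch checks that property on a toy table; the records, attribute names, and generalized values (e.g. `"130**"`) are hypothetical and not taken from the thesis or from Kohlmayer et al.'s algorithm itself.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records of the dataset."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical generalized table: ZIP code and age act as
# quasi-identifiers; "disease" is the sensitive attribute.
records = [
    {"zip": "130**", "age": "20-30", "disease": "flu"},
    {"zip": "130**", "age": "20-30", "disease": "cold"},
    {"zip": "148**", "age": "30-40", "disease": "flu"},
    {"zip": "148**", "age": "30-40", "disease": "asthma"},
]
print(is_k_anonymous(records, ["zip", "age"], 2))  # True: each group has 2 rows
print(is_k_anonymous(records, ["zip", "age"], 3))  # False: groups are too small
```

Anonymization algorithms such as the one by Kohlmayer et al. search for a generalization of the quasi-identifiers that makes this check pass while losing as little information as possible.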
Thesis Title:
Intelligent Systems Modeling Using the Gaia Methodology with an Architecture Testing Extension
Abstract:
In this project, we extend the Gaia methodology to assess qualitative characteristics during the architectural design phase of development. For architectural design within the methodology, we use the Attribute-Driven Design (ADD) method, which drives the design from quality attributes. To accommodate ADD, Gaia must be extended, particularly in its requirements specification phase. The ISO/IEC 9126 quality model is used to define the requirements and is adapted for agent-based systems. We also add environmental constraints to Gaia's requirements specification phase, because of their effect on architectural design. Finally, we implement an electronic chain-store case study using the extended Gaia methodology, evaluate its architecture using the Architecture Tradeoff Analysis Method (ATAM), and compare the result with the original methodology.
Thesis Title:
Applying Model-Driven Development into Enterprise Applications
Abstract:
Thesis Title:
A Method for Quality Engineering of Ontology across the Life Cycle
Abstract:
Thesis Title:
Abstract:
Thesis Title:
Thesis Abstract:
Thesis Title:
Quality Model of Recommender Systems Based on Gamification
Abstract:
Quality is one of the most important issues in the software domain and an essential factor in the acceptance of technologies and solutions. Recommender Systems (RSs), as software tools and techniques that provide suggestions, are widely used in e-commerce and therefore need to be evaluated in a standardized way. On the other hand, given the growing need for recommender systems on the Internet and the need to involve customers in rating them, gamification can help improve their functionality.
Thesis Title:
Big Data Storage and Retrieval Optimization based on a Quality Model
Thesis Abstract:
With the spread of the Internet, social networks and diverse data sources, the importance of “Big Data” is increasing. One of the most important challenges in dealing with Big Data is storage and retrieval, so it is necessary to provide an efficient algorithm for Big Data storage and retrieval optimization and to evaluate it. In this thesis, we will select a quality model and then propose an algorithm to optimize Big Data storage and retrieval based on the selected quality model.