Alumni (MSc)

Pouya Jaferian

MSc


Thesis Title:

 

Abstract:

 

Hossein Morshedlou

MSc


Software Engineering, 2008

Thesis Title:

 Active Data Mining using Agent

Abstract:

Nowadays, given the increasing volume of data and information, using data mining techniques to extract the hidden information and knowledge in data is necessary. Because of the huge volume of data and the importance of the most recent data in many applications, storing all of this data is not economical. Furthermore, the data to be processed is active. The inherent distribution of data is another problem that we deal with. Each of the databases (or datasets) that generates or receives data belongs to a real or legal self-interested entity, so these entities are not willing to share their knowledge freely. Considering agent and multi-agent capabilities in active and distributed environments, it seems that using their features can be useful in these environments. Up to now, most related work has considered features such as self-start and especially the mobility of agents for data mining purposes, while other features and capabilities (such as intelligence, learning, goal-orientation and social capabilities) have not been considered. In this thesis, in addition to reviewing related work and research in the agent-based data mining area, we consider the problem of data stream classification in active environments. We study this problem in two phases. First, we examine the features and capabilities of agents for the data mining task without considering their social capabilities. The second phase considers using social capabilities such as negotiation and reaching agreement for data mining in active and distributed environments. In total, the contributions of this thesis are: 1) an agent-based approach for the classification of data streams with concept drift, using the goal-oriented, intelligence, learning and reasoning features of agents; 2) a multi-agent-based approach for the classification of distributed data streams in a competitive environment, using the social capabilities of agents. The results of the experiments conducted in this thesis show the superiority of agent-based data mining and classification in active and distributed environments.
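
As a purely illustrative sketch of the first contribution, the snippet below shows, under stated assumptions, how a single agent could wrap an incremental classifier and react to concept drift by monitoring its accuracy over a sliding window; the model, window size and threshold are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch (not the thesis code): a goal-directed agent that wraps an
# incremental classifier and reacts to concept drift by resetting its model when
# accuracy over a sliding window drops below a threshold.
from collections import deque, defaultdict, Counter

class DriftAwareAgent:
    def __init__(self, window=100, threshold=0.6):
        self.window = deque(maxlen=window)   # recent correctness flags
        self.threshold = threshold           # accuracy goal the agent maintains
        self.counts = defaultdict(Counter)   # feature -> label counts (naive model)
        self.labels = Counter()

    def predict(self, features):
        scores = Counter()
        for label, n in self.labels.items():
            scores[label] = n
            for f in features:
                scores[label] += self.counts[f][label]
        return scores.most_common(1)[0][0] if scores else None

    def observe(self, features, label):
        pred = self.predict(features)
        self.window.append(1 if pred == label else 0)
        # learn incrementally from the new labelled instance
        self.labels[label] += 1
        for f in features:
            self.counts[f][label] += 1
        # drift reaction: if the accuracy goal is violated, forget the old concept
        if len(self.window) == self.window.maxlen:
            acc = sum(self.window) / len(self.window)
            if acc < self.threshold:
                self.counts.clear(); self.labels.clear(); self.window.clear()
```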

Masoumeh Kheirkhahzadeh

MSc


Software Engineering, February 2008

Thesis Title:

Presenting a Search Algorithm Based on Population-based Methods in Combinatorial Optimization

Abstract:

Ant Colony Optimization (ACO) is a metaheuristic method inspired by the behavior of real ant colonies. In this thesis, we propose a hybrid ACO algorithm for solving the vehicle routing problem (VRP) heuristically, in combination with an exact algorithm, to improve both the performance of the algorithm and the quality of the solutions. In the basic VRP, geographically scattered customers of known demand are supplied from a single depot by a fleet of identically capacitated vehicles which are subject to a weight limit and, in some cases, to a limit on the distance traveled. Only one vehicle is allowed to supply each customer. The objective is to design least-cost routes for the vehicles to service the customers.

The intuition of the proposed algorithm is that nodes which are near to each other will probably belong to the same branch of the minimum spanning tree of the problem graph and thus will probably belong to the same route in the VRP. In the proposed algorithm, in each iteration, we first apply a modified implementation of Prim’s algorithm to the graph of the problem to obtain a feasible minimum spanning tree (MST) solution. Given this clustering of customer nodes, the algorithm then finds a route within these clusters using ACO with a modified version of the ants’ transition rule. At the end of each iteration, ACO tries to improve the quality of the solutions using a local search algorithm and updates the associated weights of the graph arcs.
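
To make the route-construction step concrete, here is a minimal sketch of the classic ACO transition rule restricted to a candidate set (for example, the customers in the same MST-derived cluster); the parameter values and data layout are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch: the classic ACO transition rule, where an ant at node i
# chooses the next customer j with probability proportional to
# pheromone[i][j]**alpha * (1/distance[i][j])**beta, restricted to a candidate set.
import random

def choose_next(i, candidates, pheromone, distance, alpha=1.0, beta=2.0):
    weights = [(pheromone[i][j] ** alpha) * ((1.0 / distance[i][j]) ** beta)
               for j in candidates]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return j
    return candidates[-1]
```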

Shohreh Kazemi

MSc


Thesis Title:

 

Abstract:

Fariad Molazem

MSc


Thesis Title:

 

Abstract:

 

Milad Khalilian

MSc


Software Engineering, 2008

Thesis Title:

Cooperative Mobile Agents Recovery in Execution Time by Improvement of Message Complexity

Abstract:

Mobile agents are a promising technology for distributed problem solving. Developing real systems based on this technology demands a reliable platform which can tolerate a number of failures. Cooperation is also a key point in multi-agent systems, and many problems can be solved easily using cooperative agents. In this research, we present a comprehensive approach to fault tolerance in cooperative mobile agent systems based on rollback recovery methods and consensus. Whereas rollback recovery guarantees the consistency of a group of cooperative agents by taking checkpoints, using consensus assures that each agent operates only once throughout the execution, even in the absence of a reliable failure detection module. In our method, replication is applied to the backup agent (called the supervisor), so the performance improves substantially. Saving checkpoints without relying on stable storage is also an important feature for designing systems based on mobile agents. In the evaluation, the message complexity of the system is computed. Moreover, the effect of different factors on execution time is studied using simulation.

Meysam Ghaderyan

MSc


2008

Thesis Title:

Improving Website User Model Automatically Using Semantics with Domain Specific Concepts

Abstract:

Information overload is a major problem in the current World Wide Web. To tackle this problem, web personalization systems have been proposed that adapt the contents and services of a website to individual users according to their interests and navigational behaviors. A major component of any web personalization system is its user model. The content of the pages in a website can be utilized in order to create a more precise user model, but keyword-based approaches lack a deep insight into the website. Recently, a number of studies have attempted to incorporate the semantics of a website into the representation of its users. All of these efforts use either a specific manually constructed taxonomy or ontology, or a general-purpose one like WordNet, to map page views into semantic elements. However, building a hierarchy of concepts manually is time consuming and expensive. On the other hand, general-purpose resources suffer from low coverage of domain-specific terms. In this thesis we intend to address both of these shortcomings. Our main contribution is a mechanism to automatically improve the representation of the user in the website using a comprehensive lexical semantic resource. We utilize Wikipedia, the largest encyclopedia to date, as a rich lexical resource to enhance the automatic construction of a vector model representation of user interests. The proposed architecture consists of a number of components, namely basic log preprocessing, website domain concept extraction, website keyword extraction, keyword vector building and keyword-to-concept mapping. Another important contribution is using the structure of the website to automatically narrow down domain-specific concepts. Finally, the last contribution is a new keyword-to-concept mapping method. Our evaluations show that the proposed method, along with its comprehensive lexical resource, represents users more effectively than keyword-based and WordNet-based approaches.
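
A minimal sketch of the kind of pipeline step described above, assuming a precomputed keyword-to-concept mapping (for instance, derived from Wikipedia); the function and data shapes are illustrative only, not the thesis code.

```python
# Illustrative sketch: building a user's concept vector by mapping the keywords of
# visited pages onto domain concepts and accumulating their weights.
from collections import Counter

def build_user_concept_vector(visited_pages, page_keywords, keyword_to_concept):
    """visited_pages:      list of page ids from the user's sessions
       page_keywords:      page id -> {keyword: tf-idf weight}
       keyword_to_concept: keyword -> concept label (assumed precomputed)"""
    profile = Counter()
    for page in visited_pages:
        for kw, weight in page_keywords.get(page, {}).items():
            concept = keyword_to_concept.get(kw)
            if concept is not None:
                profile[concept] += weight
    total = sum(profile.values()) or 1.0
    return {c: w / total for c, w in profile.items()}  # normalised interest vector
```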

Ali Sebti

MSc


Artificial Intelligence, 2009

Thesis Title:

Hybrid Method to Improve Text Summarization

Abstract:

With the growing worldwide access to information and the creation of websites and online text resources, finding and studying the required information has received special attention. Automatic text summarization is one solution that gives the user an overview of all documents related to the subject of interest and thus helps with subsequent decisions. In this regard, particular types of summary can provide facilities to the user. This thesis studies and improves extractive summarization, that is, extracting parts of the text, here sentences, as a summary. Due to the large amount of information available, fast statistical methods that require less semantic analysis can address this need. The first systems designed used methods that obtain the priority of words from the distribution of repeated words, known as the tf*idf score, and then the sentence score is obtained from the total of its word scores. In other methods, summarization is based on cohesion, in which the text content is represented as a graph. From the resulting graph, important key nodes can be extracted using the many types of graph algorithms available. Previous methods determined the similarity of two sentences using the overlapping words of the two sentences.
In this thesis, using WordNet-based similarity, the similarity of two words is considered as a kind of partial overlap. A new criterion for word similarity is presented, based on a new method for calculating information content, which leads to an improvement in word similarity. In another proposed method, combinations of word co-occurrences with different orders lead to a new criterion for defining the significance of sentences, which is a kind of idf score. The combination of these co-occurring words is based on the inclusion-exclusion principle of probability. The proposed methods are then compared with similar summarization systems in the literature, and the results show an improvement. The implemented summarization platform provides an appropriate data structure designed for text, with the capacity to add new algorithms and evaluate them in minimal time. Suggestions for future research include extending the tf parameter using the discussed concepts, using the structure of Wikipedia rather than WordNet, and using phrase similarity rather than word similarity.
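
As a hedged illustration of the statistical scoring described above, the sketch below computes tf*idf sentence scores and a "partial overlap" sentence similarity through a pluggable word-similarity function (e.g. a WordNet-based measure); the exact formulas of the thesis are not reproduced.

```python
# Assumed simplification: sentence scoring by summing tf*idf word scores, plus a
# sentence similarity that uses a word-similarity function instead of exact matching.
import math
from collections import Counter

def tfidf_scores(sentences):
    docs = [s.lower().split() for s in sentences]
    df = Counter(w for d in docs for w in set(d))
    n = len(docs)
    return [sum(tf * math.log(n / df[w]) for w, tf in Counter(d).items()) for d in docs]

def soft_overlap(sent_a, sent_b, word_sim):
    """Partial overlap: each word in A contributes its best similarity to a word in B."""
    a, b = sent_a.lower().split(), sent_b.lower().split()
    if not a or not b:
        return 0.0
    return sum(max(word_sim(w, v) for v in b) for w in a) / len(a)
```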

Mozhdeh Qeraati

MSc


Thesis Title:

 

Abstract:

 

Farnoush Golshan

MSc


E-Commerce, February 2009

Thesis Title:

A New Approach for Tracing Quality Attributes in Service Oriented Architecture

Abstract:

The dynamic nature of service-oriented architecture (SOA) makes it different from other software architectures. A SOA system consists of a number of independent distributed services, which can be disconnected or replaced with a better choice in the core software dynamically at runtime. In other words, the architecture of such software changes continuously at runtime. As the links between the main software and other services are created and destroyed at runtime, many quality attributes may change noticeably.

Because of this dynamic behavior, quality issues become more important and complicated in service-oriented software and need a different approach based on the special characteristics of this type of architecture. Of the many studies that have been carried out on quality in SOA, most revolve around Quality of Service (QoS). Besides important issues in this field such as quality of service, the overall quality of a software system consisting of multiple independent services is also of great importance.

The overall quality of a service-oriented architecture is a consequence of the qualities of the services and components that constitute the architecture, each of which is built by different vendors and with different qualities. On the other hand, the presence of these services and components in the system is not permanent, and they may be disconnected from the architecture when required or replaced by a better choice at runtime. Therefore, the quality of a service-oriented architecture fluctuates considerably at runtime.

In this thesis, a new method is introduced for tracing the changes that may occur in the quality attributes of a service-oriented software system at runtime, by means of a formal model called graph transformation systems (GTS). Using this method, the overall quality of the software can be recalculated as the architecture changes at runtime by connecting to or disconnecting from a service. This method can be used in SOA quality management and change management. It can also be useful in the service selection process, as a method to predict the quality state of service-oriented software.
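
Purely as an illustration of the bookkeeping involved (the thesis itself uses graph transformation systems, not this code), the sketch below recomputes an aggregate quality score whenever a service connects or disconnects; the weighted-average aggregation and attribute names are assumptions for the example.

```python
# Illustrative sketch only: recompute an aggregate quality attribute of an SOA
# system whenever a service connects to or disconnects from the architecture.
class QualityTracer:
    def __init__(self):
        self.services = {}  # service name -> (quality score, weight)

    def connect(self, name, quality, weight=1.0):
        self.services[name] = (quality, weight)
        return self.overall()

    def disconnect(self, name):
        self.services.pop(name, None)
        return self.overall()

    def overall(self):
        if not self.services:
            return None
        total_w = sum(w for _, w in self.services.values())
        return sum(q * w for q, w in self.services.values()) / total_w
```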

Elnaz Delpisheh

MSc


Software Engineering, August 2009

Thesis Title:

Analysis and design of the evaluating process of data mining system architectures

Abstract:

Nowadays, data mining systems are widely used to discover knowledge from large distributed repositories. These systems require specific functionalities as well as quality attributes different from those of traditional software systems. Some of these requirements are extensibility, integrity, support for high-dimensional data, flexibility, privacy preservation, distributability, customizability, transparency, support for large amounts of data, fault tolerance, and portability. Fulfilling these quality attributes relies vitally upon developing a well-suited architecture and a method to evaluate it. Consequently, the more successful we are in pursuing the requirements, the less we pay to repair and maintain the system.

In this thesis, we propose a method to evaluate the architecture of data mining systems. We extracted the criteria needed to evaluate data mining system architectures, improving and adapting the best-known software architecture evaluation method (ATAM). To implement our method, called DMATAM, we broadly used modeling and measurement tools as well as data mining system criteria. Furthermore, we analyzed our method using a framework for choosing architecture evaluation methods. Consequently, our proposed method was found to be an appropriate method to evaluate data mining system architectures.

Besat Zardosht

MSc


Software Engineering, January 2010

Thesis Title:

Automatic Evaluation of Machine Translation, Enhancement of N-gram Based Methods

Abstract:

Since machine translation has become a widespread technology, the evaluation of machine translation is critical. Human evaluation of machine translation is expensive and time-consuming. Automatic evaluation metrics can be a good substitute, as they are fast and cheap.

A considerable number of automatic machine translation evaluation metrics have been developed since the 1990s. However, not all of them are practical. In the 21st century some successful and practical methods appeared.

Most machine translation evaluation metrics are based on string similarities. However, some of them use a machine learning approach.

Bleu is one of the most popular metrics for machine translation evaluation. This thesis extends Bleu by assigning proper weights to N-grams. The part of the sentence to which a word belongs, which can be obtained from its parse tree, is used to calculate the weight. The arithmetic mean and harmonic mean of the parse tree components are adopted to estimate the weight of an N-gram. Like the Bleu method, this method is language independent and simple. This thesis addresses several weaknesses of the Bleu method and brings up adequate solutions for some of them, in order to propose a method that correlates better with human judgment.
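
As a hedged sketch of this kind of extension (the actual weights in the thesis come from parse trees; here the weight function is left as a parameter), the snippet below computes a modified n-gram precision in the style of Bleu where each matched n-gram contributes its weight instead of a flat count.

```python
# Sketch of weighted n-gram precision: matched n-grams contribute weight_fn(gram)
# instead of 1, so syntactically important n-grams can count more.
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def weighted_ngram_precision(candidate, reference, n, weight_fn=lambda g: 1.0):
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    matched = sum(min(c, ref[g]) * weight_fn(g) for g, c in cand.items())
    total = sum(c * weight_fn(g) for g, c in cand.items())
    return matched / total if total else 0.0
```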

Experimental results indicate that this method achieves a higher correlation with human judgments than the original Bleu.

Elham Moazzen

MSc


Software Engineering, March 2010

Thesis Title:

Using Aspect-Oriented Approach in Modeling and Evaluation of Non-functional Requirements in Web-based Systems Design

Abstract:

Today, web-based systems provide various functionalities and content for a wide variety of end users. Considering our increasing dependence on web applications and their intricate features and complex functional requirements, the quality of such systems has become a critical factor for their success. Hence, there is now legitimate and growing concern about the manner in which web-based systems are developed and about their long-term quality and integrity. In order to be successful, a web application must be systematically developed in terms of both its functional and non-functional requirements. Web engineering, an emerging discipline, is the establishment and use of sound scientific, engineering and management principles and disciplined and systematic approaches to the successful development, deployment and maintenance of high-quality web-based systems and applications. In this research, by adopting and examining an industrial case study, we found that conventional web engineering approaches are driven merely by functional requirements, and decisions about the quality of these functional concerns are not explicitly and systematically made until the implementation and maintenance phases. We found that the realization of the infrastructure mechanisms for fulfilling non-functional requirements becomes tangled and scattered across functional modules. Therefore, as the system grows, its maintenance effort dramatically increases. We extracted the crosscutting pattern of these non-functional requirements and used the concept of Aspect, first introduced in Aspect-Oriented Programming, in order to explicitly model and modularize them. We then improved the development process of our industrial case study by aspectual injection of non-functional requirements into its analysis, design and implementation stages. Next, in order to describe the proposed software architecture, we surveyed well-known Aspect-Oriented Architecture Description Languages (AOADLs). Considering the results of this study, we extended UML 2.0 in order to describe non-functional concerns as architectural aspects. We then tested our approach on security and performance as candidate quality attributes in the case study. Since the main contribution of the aspect-oriented approach is the improvement of modularity, we expected the proposed architecture to have a lower performance cost. In this regard, we evaluated our approach using the Aspectual Software Architecture Analysis Method (ASAAM) and reported the results in the dissertation.

Elham Abd Nikooie Pour

MSc


Software Engineering, February 2011

Thesis Title:

Agent-based human-computer interaction for blind persons

Abstract:

User satisfaction is an important problem in user interface design. Most available user interfaces are not suitable for disabled people. Blind users cannot access information through user interfaces as easily as sighted users. For example, a sighted user can at a glance distinguish between several pieces of information on the user interface according to their positions, colors, and styles, but a blind person who uses a screen reader cannot perceive these visual features. Advertising links and additional information leave these users confused. Therefore, designing intelligent user interfaces that are autonomous, goal-directed, dynamic and based on user interests seems essential. Such intelligent UIs can increase task speed and decrease wasted time. In this thesis, several interviews with blind users were conducted and their needs, tools, problems and requirements during web access were identified. Finally, according to these requirements, an intelligent personalized search system was designed and implemented. This system, without any effort from the user, implicitly learns the user’s interests and represents them in a user profile.

The main outcomes of the project are: 1) analysis of an agent-based approach to search personalization, 2) a framework for comparing available systems in this field, and 3) a survey of systems designed for blind persons.

Sima Salmani

MSc


Software Engineering, 2011

Thesis Title:

Dynamic Ontology for Web Personalization

Abstract:

Ontology is used to provide semantic knowledge, create a common conceptualization, and provide compatibility, disambiguation, aggregated views and common terms for both sides of a relation. With these goals, it is used in a wide variety of areas such as the Semantic Web, e-commerce, natural language processing, knowledge engineering and so on. One important usage of ontology is in web personalization. Web personalization is the process of customizing a website to the needs of specific users. It utilizes a combination of domain knowledge and a user model for this purpose. Ontology can be used either for representing domain knowledge or for constructing a user model which represents user preferences. In this thesis, user preferences are extracted from web server logs, and for modeling them a combination of overlay approaches and approaches based on lexical ontologies is utilized. Thus, according to the overlay approach, the domain ontology is constructed first, and then the user models are constructed as a weighted mask of the domain ontology. We use Wikipedia as a lexical ontology for constructing the domain ontology.
In addition, since the user’s preferences change over time, the ontological user model should be dynamic; otherwise the recommendation engine cannot make efficient recommendations. Thus we update the ontological user models at fixed time intervals.
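
As an illustrative sketch only (the update rule is an assumption, not the thesis formula), the snippet below keeps the user model as a weighted overlay of the domain ontology and refreshes it at each time interval with exponential decay so that old interests fade.

```python
# Assumed update rule: blend the previous overlay weights with the counts observed
# in the latest interval, so the ontological user model tracks changing preferences.
def update_overlay(user_weights, interval_counts, decay=0.8):
    """user_weights:    concept -> current interest weight (the overlay mask)
       interval_counts: concept -> visits observed in the latest time interval"""
    concepts = set(user_weights) | set(interval_counts)
    return {c: decay * user_weights.get(c, 0.0) + (1 - decay) * interval_counts.get(c, 0.0)
            for c in concepts}
```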

Sajjad Zare

MSc


Software Engineering, January 2011

Thesis Title:

Mapping Business Process Modeling to Formal Model

Abstract:

Service-Oriented Architecture (SOA) is a new paradigm for distributed computing which utilizes services to support the rapid, agile, and low-cost development of distributed applications. Web services have been accepted as the best way to implement SOA. Several languages exist for defining new and more complex services or business processes, which are implemented, for example, by means of web service composition. One of the most visible standards among existing approaches for building service compositions in the form of business processes is the Business Process Execution Language (BPEL).

BPEL is used to describe the execution logic of web service applications by defining their control flows and providing a way for partner services to share a common context. The ability to analyze functional properties and to predict quality (such as reliability) are two issues in a composite web service. Most approaches for specifying web service composition in the form of business processes suffer from these issues, which are important for designers.

This work proposes an approach to predict the reliability of web service compositions built on BPEL. The proposed approach first transforms the business process specification into an appropriate modeling language, namely Petri nets. It has been shown that Petri nets provide an appropriate foundation for performing static verification. We also predict the reliability of the WS-BPEL process using the Petri net model. The proposed method is applied to a loan approval service as a case study.

Esmaeel Rezaee

MSc


Thesis Title:

 

Abstract:

 

Meisam Nazariani

MSc


Thesis Title:

 

Abstract:

 

Ani Megerdoumian

MSc


Software Engineering, January 2012

Thesis Title:

Evaluation of Machine Translation Systems Architecture to Improve Hybrid Architecture

Abstract:

Less attention has been paid to intelligent systems, and especially machine translation applications, in terms of a production process based on software engineering. Machine translation is a kind of natural language processing application in which a text is taken as input in a source language and an equivalent text is produced in the target language. Machine translation is an open task, which means that many valid translations can be produced from different combinations of words. Because of cost and resource constraints, it is better to evaluate the structural quality of machine translation systems during the analysis and design phases. Current machine translation evaluation techniques focus mostly on the quality of the produced sentences rather than on the structure of the system. As a result of this neglect, quality attributes and issues connected to the non-functional requirements of such systems are ignored. Considering these facts, we propose a method for evaluating the architecture of machine translation systems.

In this thesis we present a new method for evaluating the architecture of machine translation systems. In this method, the non-functional requirements of machine translation systems are assessed by representing quality attributes qualitatively. Moreover, using our proposed method, we evaluate the architecture of three hybrid machine translation systems. Finally, we analyze our method using a framework for choosing architecture evaluation methods and show that our proposed method is an appropriate approach for evaluating the architecture of hybrid machine translation systems.

Fatemeh Jabbari

MSc


Software Engineering, 2011

Thesis Title:

Using Data Mining Techniques in Web Log Analysis for Producing Personalized Web Pages

Abstract:

Web mining is the application of data mining techniques to the World Wide Web in order to automatically extract knowledge from web data. There are three main branches in web mining with respect to the data being mined: web content mining, web structure mining and web usage mining. The purpose of web usage mining is to automatically extract knowledge from web usage data in a website or a specific web domain. Web usage mining aims to extract meaningful knowledge from users’ navigational behavior in the website by analyzing the web log files on web servers. The extracted knowledge can be used in different applications such as web personalization. Recently, using web usage mining in web personalization has become common as an alternative to classic methods. This project aims to use data mining techniques in web log analysis in a way that improves the efficiency of web personalization. This efficiency is measured by the precision and coverage of the personalization system. For this purpose, we have chosen sequential pattern mining techniques and have tried to improve the algorithms considering the qualities of web log data. An important challenge in sequential pattern mining algorithms is that they ignore the different nature of items in the mining process. The proposed method tries to take into account the different occurrences of the same item in different sessions by adding weight to item support counts. The proposed method has been tested in a personalization system. The results show improvement in the precision and coverage of the personalization.
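
A minimal sketch under stated assumptions (the exact weighting scheme of the thesis is not reproduced): the support of a sequential pattern over sessions, where each supporting session contributes a weight based on how often the pattern's items occur in it, instead of a flat count of one.

```python
# Illustrative weighted-support computation for sequential pattern mining.
def is_subsequence(pattern, session):
    it = iter(session)
    return all(item in it for item in pattern)

def weighted_support(pattern, sessions):
    support = 0.0
    for session in sessions:
        if is_subsequence(pattern, session):
            # weight by the average occurrence count of the pattern's items
            occurrences = [session.count(item) for item in pattern]
            support += sum(occurrences) / len(occurrences)
    return support / len(sessions) if sessions else 0.0
```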

Ali Abdoli

MSc


Artificial Intelligence, February 2012

Thesis Title:

Duplicate Record Detection in Operational Data Using Semantic Analysis

Abstract:

Duplicate record detection is a main activity in information systems. Detecting approximately duplicate records is a key problem in data integration and data cleaning. The process of duplicate record detection aims at deciding whether two records represent the same real-world object.

The similarity function is the major element in duplicate record detection. A similarity function assigns a score to a pair of data values. Most approaches concentrate on string similarity measures for comparing records. However, they fail to identify records which share semantic information. Therefore, in this study we propose a new similarity function which takes into account both semantic and string similarity.
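
As an illustrative sketch (the combination weights and the semantic source are assumptions, not the thesis design), a hybrid similarity can blend a character-based string similarity with a pluggable semantic similarity so that semantically equivalent values can still be matched.

```python
# Hybrid similarity sketch: edit-distance-based string similarity combined with a
# semantic similarity function (e.g. synonym- or WordNet-based), weighted by alpha.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def hybrid_similarity(a, b, semantic_sim, alpha=0.5):
    string_sim = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    return alpha * string_sim + (1 - alpha) * semantic_sim(a, b)
```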

Finding the proper similarity function for a given data set is a key problem in duplicate record detection. In this study, a new method is proposed to find the most appropriate similarity functions for data sets.

All proposed methods are tested on real-world data sets and evaluated using standard metrics. The experimental results indicate that the new similarity function outperforms popular similarity functions on these metrics. Based on the results, the proposed method for finding a proper similarity function outperforms all other combinations of similarity functions.

Rezvan Shiravi

MSc


Software Engineering, October 2012

Thesis Title:

New Requirement Process Model for Critical Systems Focusing on Validation and Verification

Abstract:

The development of critical systems is very important because in these systems any incorrect behavior may lead to catastrophic loss in terms of cost, damage to the environment or even human life. To avoid this, requirements should be identified and specified accurately, completely and precisely. For this reason, Verification and Validation (V&V) in Requirements Engineering (RE) must be carried out in order to produce such an error-free system.

Although some techniques have been presented in this area, drawbacks such as reliance on a single specific approach, limitations on system size, high time consumption and complexity make them inappropriate in many situations. This thesis presents a requirements V&V technique in order to address the weaknesses of current ones.

Before proposing the new technique, different kinds of software systems were classified from different viewpoints in order to identify the position of critical systems. A survey of current V&V techniques was conducted, and they were subsequently classified at two levels. Those useful in RE were then extracted. For evaluating V&V techniques, a framework was constructed in which a set of measurable criteria was suggested.

In the suggested technique, requirements are divided into two categories, critical and non-critical, in order to decrease time consumption as well as complexity. Because of the importance of critical requirements, the technique concentrates on them. The suggested technique is a combination of informal, semi-formal and formal models in which there is efficient communication between customers and users as well as precise and accurate specification of requirements.

After presenting the new technique, the phases of the RE process were identified and the position of the new technique was specified. To investigate the new technique, a traffic control system was selected as a case study and the technique was applied to it successfully. In order to evaluate the technique, a descriptive comparison between the suggested technique and its rivals was first conducted.

Because of some ambiguities in the descriptive comparison, a qualitative comparison between the suggested technique and two others, theorem proving and the goal-oriented approach, was carried out by applying the presented framework.

The results show that the suggested technique yields precise, accurate and valid requirements, and detects errors, defects and inconsistencies. Moreover, the time consumption and complexity of this technique are lower than those of the others, and it does not have their limitations. Although the required technical skill is high, this deficiency could be compensated for by an automation tool.

Mahdieh Monzavi

MSc


Software Engineering, October 2012

Thesis Title:

New computational ontology with consideration of concept domain for semantic analysis of transaction logs

Abstract:

Nowadays, with the ever-growing usage of intelligent agents and the need for knowledge representation and its reuse, ontology is being widely applied to facilitate the understanding of knowledge. Since 1993, there have been many definitions of ontology with different approaches, which focus on creating a formal explicit specification of a shared conceptualization of a specific domain. In our Cognibase model, a new representation of ontology is presented that offers a consistent and unique framework within which intelligent agents can communicate. Elimination of redundant components, a better understanding of knowledge and efficient inference capabilities are among the features of this model. The semantic analysis system of this model receives transaction logs with different formats as its input and produces an ontology model as output. In order to maintain system integrity, and due to the wide diversity of transaction log formats, the logs are preprocessed and then integrated into a “metadata” representation before entering the system.

A process model is presented in order to support the software approach, which can produce the ontology model automatically. Modeling approaches are also used to illustrate the architecture of this automatic system.

The proposed ontology model is evaluated by a question answering approach based on the ontology concepts and terms, and has outperformed similar models in producing a semantic analysis system, which leads to an efficient Cognibase model for presenting the outputs.

Masoumeh Nourollahi

MSc


Software Engineering, January 2014

Thesis Title:

A method for validation and verification of ontology based on quality engineering

Abstract:

Nowadays, with the daily increasing growth of knowledge-based systems, the usage of ontologies for sharing knowledge in knowledge-based systems is increasing too. One of the issues in applying software is its validation and verification. Despite much interest in ontology evaluation in recent years, ontology validation and verification has received little attention. One of the challenges in ontology validation and verification is the lack of a clear distinction between the concepts of evaluation and of validation and verification. Another challenge is the current focus only on post-development activities.

This thesis presents a framework for validating and verifying ontologies by considering correctness, completeness, accuracy and consistency measures, with respect to the ontology life cycle, existing evaluation and validation and verification methods, and the quality criteria of the intended ontology development project. The proposed framework is presented in eight steps. These steps are of two kinds: steps that focus on the ontology’s validation and verification as part of a software system, assessing the system’s requirements and the produced ontology’s suitability against them; and steps that focus on the ontology as a human-intelligible and machine-interpretable knowledge representation, emphasizing the assessment of the ontology independently of the system it will be used in.

The feasibility of the proposed framework is investigated by applying it to a “Tourism Guide System” as a test case. Finally, a guideline for applying the proposed methodology is provided. Comparison of the proposed methodology with four other comprehensive methods presented for ontology evaluation or validation and verification shows the proposed method’s comprehensiveness in covering ontology validation and verification goals.

Ali Kamali

MSc


Software Engineering, September 2014

Thesis Title:

Service oriented architecture for cloud environment

Abstract:

With the growing use of cloud computing as an infrastructure for providing web-based services, a clear understanding of the cloud becomes necessary, so it is essential to consider the cloud as an implementation environment at design time. Some cloud characteristics, in addition to changing services, change the requirements for service delivery. To meet these requirements, we must first understand them, then design a new architecture according to the extracted requirements and the potential of the cloud.

For this purpose, in the present thesis, we first focus on understanding cloud computing and the features that should be considered by the designer of cloud-based services; then a standard language based on the UML modeling language is described to provide a development model focused on the cloud as an environment. Next, the characteristics of the cloud that give developers new opportunities are identified, and according to the current requirements and the opportunities created by the cloud, we propose a new architecture to meet the existing requirements and enhance system quality. In order to provide the architecture, we use a service-oriented architecture as the base architecture for the service provider and utilize the service bus as an interface component between the user and the service. Also, with regard to the new features of the system, some changes have been applied in order to reduce the final time required for service delivery. The proposed architecture, in addition to increasing the quality of service, provides new opportunities such as generating services based on needs and delivering a service with just one request from the end user.

Malihe Hashemi

MSc


Software Engineering, September 2014

Thesis Title:

New method for verification and validation of data mining systems

Abstract:

Data mining systems discover patterns and rules and extract useful knowledge from the data stored in databases. However, many of the obtained patterns are spurious, obvious or redundant. Also, despite their correctness, these patterns may not be useful for a specific business and may not meet its requirements. Hence, it is of great significance to consider verification and validation activities throughout the data mining system’s life cycle.

Verification and validation activities examine the system from various dimensions at each step of the life cycle in order to achieve early detection of errors and defects. The rigor and intensity of performing each of these activities depends on the system’s specific properties, such as sensitivity, size and complexity. Focusing on this issue, this thesis proposes a framework for verification and validation. This framework is customizable with respect to the system’s properties and its development conditions.

A framework is one of the main techniques widely used in software engineering to develop software products. The application of a framework should be based on an engineering approach in order to meet quality, cost, and schedule goals in a software project. In this regard, a framework should be applied in a structured, systematic and measurable manner. In this thesis, a new engineering perspective on software frameworks is proposed, and important issues in this approach, such as specification and representation, measurement, soundness and completeness, are discussed in relation to it. For this purpose, the existing and most-referenced frameworks in software engineering, which reveal the most common elements and properties of a framework, are investigated. By analyzing these elements and properties, a meta-model based on the UML class diagram is provided which indicates the general concepts and relationships of a software framework in the proposed perspective. Regarding the importance of specification and representation in acquiring an engineering perspective on software frameworks, this issue is analyzed after presenting the intended perspective.

Based on the presented meta-model, a verification and validation framework is proposed, and according to the results of the performed analyses, its specification and representation are taken into account. This framework is presented in a way that makes it applicable to both data mining and software systems. Eventually, the presented framework is applied to the case of verifying and validating the use of Commercial Off-The-Shelf (COTS) components in component-based systems as a case study.

Soheil Mohammadi

MSc


Software Engineering, February 2015

Thesis Title:

Design and Implementation of Data Warehouse in Cloud Environment

Abstract:

Nowadays, many organizations in the business field work nonstop, full time, seven days a week. This has caused a change in the decision support paradigm: it is necessary to make decisions according to the newest business data. So the modern data warehouse has to be constantly accessible to reach the decision support goal (presenting information to decision makers and answering queries constantly), and also to upload data frequently (refreshing the data warehouse quickly to cover the newest data produced in the business field).

There are many infrastructures for deploying a data warehouse. Nowadays, one of the most used and most discussed infrastructures is cloud computing. The reason for the importance of this computational model for deploying different applications is the capabilities that it presents. Accordingly, the goal of this thesis is to present a new architecture to deploy a data warehouse with real-time capability on a cloud computing infrastructure. For this purpose, we present our work in three parts, briefly described as follows.

In the first part, we start by examining concepts such as “data warehouse”, “cloud computing” and “real-time data warehouse”. Then, according to the available definitions and the understanding gained from these concepts, we present the relevant characteristics of the data warehouse and extract the requirements on an infrastructure serving as the basis for deploying the data warehouse. Then, by evaluating cloud computing properties against the requirements of the data warehouse, we show that this environment can be an appropriate infrastructure for deploying a real-time data warehouse. In the assessment part of this thesis, we also compare cloud computing and the data center as two implementation infrastructures for data warehouses and describe the capabilities that each provides for data warehouses, and we show that, according to their characteristics, cloud computing can be the more appropriate infrastructure for deploying a data warehouse.

In the second part, according to our understanding of cloud computing, we extract the characteristics that the designer should consider at design time and present them in the form of a requirements list; and by developing a meta-model based on a standard modeling language (UML), we provide the possibility of modeling applications to be developed on a cloud computing infrastructure. To show the soundness of the proposed meta-model, we apply it to a case study with a defined specification and present the deployment diagram of the case study system on cloud computing according to the proposed meta-model.

Finally, in the third part, according to the specifications and requirements of a data warehouse, and also the characteristics and capabilities presented by cloud computing as an infrastructure for deploying applications, we present a new architecture for deploying a data warehouse in a cloud computing environment. The distinguishing characteristic of the presented architecture is the ability to adjust and manage the real-time factor in data warehouse administration, based on the requirements and criteria that exist in the business environment. In other words, this architecture, in addition to guaranteeing qualitative requirements such as constant accessibility and constant responsiveness to queries, provides a middle ground between data warehouse systems that are refreshed in batches at specified time periods and real-time data warehouses that refresh themselves constantly.

Based on this architecture, a method for cost estimation based on qualitative criteria, such as the rate of data arrival and the degree of real-time behavior, is available to users and enables them to strike a balance between costs and real-time needs.

Davood Bahadori

MSc


Thesis Title:

 

Abstract:

 

Sahar Dehghan

MSc


E-Commerce, January 2016

Thesis Title:

Engineering IT Metrics in order to Justify New Technologies

Abstract:

The advantages of information and communication technology lead organizations to adopt new technologies in this field. Adopting new technologies without considering organizational readiness causes a waste of valuable resources and a failure to obtain high profits in the organization. In order to reduce risk and maximize profits, before any operational action and planning, it is essential for new technology adoption to have a good understanding of the current situation of the company or organization and to understand its shortcomings and weaknesses. Using quantified measurement is a good way to attain accurate and clear results. Cloud computing is one of the new technologies in the field of information and communication technology. In this project, organizational readiness is measured quantitatively. First, the factors influencing cloud computing adoption are identified and classified into three contexts: environmental, organizational and technological. Then measurement metrics for the factors influencing cloud computing adoption are determined. The influencing factors and the evaluated measurement metrics can be presented in the form of a measurement framework. This collection of information also has some shortcomings, which a deeper study can compensate for, providing a comprehensive measurement framework. Finally, two measurement methods for organizational readiness for cloud computing adoption are presented. The first method is a figure of merit. In order to increase measurement accuracy, a second method is presented that combines the figure of merit with fuzzy inference. The more accurately organizational readiness is measured, the more accurately the gap between the organization’s current status and the ideal status for cloud computing adoption can be determined. As a result, decisions about cloud computing adoption in the organization become easier and more accurate, and it can be determined whether an organization is prepared to adopt cloud computing or should first improve its current facilities. The factors and measurement methods identified in this project can also be used for the adoption of other new technologies in organizations.
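
A minimal sketch, assuming a simple weighted figure of merit (the fuzzy-inference variant mentioned above is not reproduced): each readiness factor gets a normalized score and an importance weight, and the organization's readiness is their weighted average. The factor names and values below are hypothetical.

```python
# Illustrative figure-of-merit readiness score from normalized factor scores.
def figure_of_merit(scores, weights):
    """scores:  factor name -> measured value normalized to [0, 1]
       weights: factor name -> relative importance"""
    total_w = sum(weights.values())
    return sum(scores[f] * w for f, w in weights.items()) / total_w

# Example with hypothetical factors and values:
# figure_of_merit({"top_management_support": 0.8, "it_infrastructure": 0.6},
#                 {"top_management_support": 2.0, "it_infrastructure": 1.0})
```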

Zohre Rezaei Kinji

MSc


Software Engineering, February 2016

Thesis Title:

A New Approach for Managing Big Data Security in Cloud Computing Systems

Abstract:

In this study, we worked on Big Data privacy preservation in the cloud environment. First, different definitions of Big Data are studied and then, based on those definitions, we present a new definition. In our definition, the velocity of changes is added to the velocity of Big Data. According to the Big Data challenges, the related tools are identified. As these tools are used in the cloud environment, the use of the cloud as a platform for storing Big Data is considered.

Data stored in the cloud environment is available to everyone. Each person uses a specific part of the cloud, related to the data they need to access. Since each part of the cloud is subject to specific types of attacks, the paths of data access have to be secured. Therefore, we analyzed the security challenges in the cloud environment and then presented a new classification of attacks which makes clear all possible attacks on the different parts of the cloud environment.

After analyzing these attacks, the results showed that individual privacy is the most affected. Therefore, continuing the study of privacy preservation, we selected the k-anonymity approach. Since there are various k-anonymity algorithms, we used the optimal algorithm presented by Kohlmayer et al., and based on it we developed our proposed approach.
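
For illustration only (this is just the k-anonymity property check, not the optimal anonymization algorithm of Kohlmayer et al. used in the thesis): a table is k-anonymous if every combination of quasi-identifier values appears in at least k records.

```python
# Check the k-anonymity property over a list of record dictionaries.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Example:
# records = [{"zip": "130**", "age": "20-30", "disease": "flu"}, ...]
# is_k_anonymous(records, ["zip", "age"], k=2)
```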

Akram Rahimi

MSc


Software Engineering, September 2016

Thesis Title:

Intelligent systems modeling using the Gaia methodology with an architecture testing extension

Abstract:

Today, with more complex requirements, the deployment of agents in the design of software systems has increased significantly. In this regard, agent-based software engineering was formed to meet those requirements, because object-based methodologies could not support specific agent characteristics such as autonomy, proactiveness, and so on. But since agent-based software engineering is a new approach, the methodologies proposed in this area have shortcomings and weaknesses in various aspects of their development processes. Studies have shown that most of these methodologies either do not support a testing phase or treat it poorly. On the other hand, in software engineering, the architecture phase is the best place to address the qualitative demands on a software system, and evaluating the qualitative characteristics in the architecture phase is an early assessment that prevents the costs of defects and project failures later on.
In this project, we extend the Gaia methodology to assess qualitative characteristics in the architecture phase, and for architectural design in the methodology the Attribute-Driven Design (ADD) method, which designs based on qualitative characteristics, is used. The Gaia methodology needs to be extended to accommodate ADD, particularly in the requirements specification phase. The ISO2196 quality model is used to define requirements and is extended for agent-based systems. We also added a requirements specification phase to the Gaia methodology and defined environmental constraints because of their effect on architectural design in this methodology. Finally, we implement an electronic chain store case study using the extended Gaia methodology, evaluate its architecture using the Architecture Tradeoff Analysis Method (ATAM), and compare it with our original methodology.

Mohammad Ghaemi Fard

MSc


Software Engineering, October 2016

Thesis Title:

Applying Model-Driven Development into Enterprise Applications

Abstract:

Developing enterprise application software should be based on new technologies; even legacy applications are migrating to new technologies. Developing enterprise software requires an enormous amount of time and cost. In addition, expert developers who are familiar with new technologies are rare. Therefore, companies and organizations usually decide to use ready-made software products. On the other hand, software development companies are trying to release their services as off-the-shelf components. During this process, the most important issue is applying changes to software products. To achieve that, software companies have to develop flexible software components with a high level of cooperation and good documentation. In this thesis, by investigating the technologies of enterprise software development, we arrive at a model which is transformable into code for different technologies. This enables developers to work with a task-assimilation method which affects the phases of software development. From enterprise application analysis to implementing the product, developers mostly work with models. By applying our approach to enterprise application development, applying changes to the software and migrating technologies become easy, while the number of required domain-expert users and the time and cost of software development are reduced. To validate the proposed approach, we take a broker system as the test-bed for applying our models and observing the outcomes.

Mina Ardakani

MSc


Software Engineering, February 2017

Thesis Title:

A Method for Quality Engineering of Ontology across the Life Cycle

Abstract:

Nowadays, the use of ontology in intelligent systems as a way to represent and share knowledge of a specific domain is increasing. Ontology, like other engineering products, needs a method for quality assessment and evaluation during its life cycle. Measurement is the key element of engineering processes that provides systematic evaluation of quality. This is also true for the quality engineering of ontology. Recently, the quality assessment of ontology has been the topic of many research activities. Many works focus on the structural quality factor regardless of the semantic and implicit knowledge of the ontology. Another challenge is that the quality assessment of the ontology construction process has received less attention than the quality of the final product.
The aim of this thesis is to present a method that addresses the challenges outlined in previous research. We focus on three quality factors of ontology: functional adequacy, structural quality and maintainability. A plan is then presented to measure these quality factors in four steps, such that each step is associated with one of the stages of the general life cycle of ontology. At each step, the requirements of that stage of the life cycle and a quality model to measure them are presented. Knowledge Cartography (KC), a new reasoning technique, is used to represent conceptualization requirements and measure the structural quality of the ontology. The main feature of KC is the definition of new metrics which take into consideration implicit knowledge and the meaning of description logic axioms. The results of the evaluation indicate that, compared to related works, the proposed method considers more analytical details and also proposes new structural metrics for further semantic aspects of ontology.

Ronak Bekri

MSc


Software Engineering

Thesis Title:

Abstract:

With the rapid development of information technology, huge amounts of data are produced all the time. Everything and everyone is generating data. Data comes from a wide range of sources: human-generated data, organization-generated data and machine-generated data. The growth of large volumes of a wide variety of data with high velocity indicates that we are in the era of “big data”. It brings many opportunities and challenges. Some of these challenges include capturing, analysis, storage, searching, sharing, visualization and privacy. This fast growth of heterogeneous data has revealed the need for adequate quality measurement indicators, since quantifying data quality is essential in big data. The aim of this thesis is to focus on the storage and retrieval challenge of big data and to analyze how data quality for big data storage and retrieval can be quantified with regard to particular dimensions. First, several requirements are stated for determining data quality factors. Second, we analyze the quality measurement indicators in the literature with respect to the defined requirements. Then, a new data quality model for big data storage and retrieval, which needs to meet the defined requirements, is designed. The proposed data quality model supports the quality issues faced by big data storage and retrieval in order to achieve better quality storage. Thus, it is important to consider this proposed data quality model for big data storage and retrieval in the early phases of the big data life cycle, so as to optimize the quality of stored data. In the final section, we evaluate the proposed model against existing methods and approaches.

Hamidreza Abbasi Niasar

MSc


Software Engineering

Thesis Title:

Abstract:

Parya Izadpanah

MSc


E-Commerce

Thesis Title:

Quality model of recommender systems based on gamification

Abstract:

Quality is one of the most important issues in the software domain and an essential factor in the acceptance of technologies and solutions. Recommender Systems (RSs), as software tools and techniques that provide suggestions, and given their wide use in e-commerce, need to be evaluated in a standardized way. On the other hand, considering the growing need for recommender systems on the Internet and the need to involve customers in rating them, gamification can help to improve their functionality.

Faezeh Alimadadi

MSc


Software Engineering

Thesis Title:

Abstract:

Fatemeh Ahmadi

MSc


Software Engineering

Thesis Title:

Big Data Storage and Retrieval Optimization based on a Quality Model

Abstract:

With the spread of internet use, social networks and different data sources, the importance of “Big Data” is increasing. One of the most important challenges in dealing with Big Data is storage and retrieval. So it is necessary to provide an efficient algorithm for Big Data storage and retrieval optimization and to evaluate it. In this thesis, we select a quality model and then propose an algorithm to optimize big data storage and retrieval based on the selected quality model.