Tutorials

 
 
Tutorial 01

Title of the Tutorial: Intelligent Digital Image Processing Operators Based on Computational Intelligence Techniques

Dr. Mehmet Emin YUKSEL, Professor & Chairman, Department of Biomedical Engineering, Erciyes University, TURKEY

Summary 

Digital imaging is becoming more and more widespread in many different areas of science and technology. Even though the quality of digital imaging technologies increases every day, digital images are inevitably corrupted by noise during image acquisition and/or transmission due to a number of imperfections caused by image sensors and/or communication channels. In most image processing applications, it is of vital importance to remove the noise from the image data because the performance of subsequent image processing tasks (such as segmentation, feature extraction, object recognition, etc.) is severely degraded by the noise. A good noise filter is required to satisfy two conflicting criteria: (1) suppressing the noise while at the same time (2) preserving the useful information (edges, thin lines, texture, small details, etc.) in the image. Unfortunately, the great majority of currently available image filters cannot simultaneously satisfy both of these criteria. They either suppress the noise at the cost of distorting the useful information in the image, or preserve image information at the cost of reduced noise suppression performance.
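
To make the trade-off concrete, here is a minimal Python sketch (assuming NumPy and SciPy are available) that corrupts a synthetic image with impulse noise and restores it with a plain 3x3 median filter, a common baseline rather than the NF operator presented in this tutorial; the thin bright line stands in for the fine detail that such a filter tends to destroy.

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)

    def add_impulse_noise(image, p):
        """Corrupt a grayscale image with salt-and-pepper impulse noise:
        each pixel is replaced by 0 or 255 with total probability p."""
        noisy = image.copy()
        mask = rng.random(image.shape)
        noisy[mask < p / 2] = 0                      # pepper
        noisy[(mask >= p / 2) & (mask < p)] = 255    # salt
        return noisy

    # Synthetic test image: a smooth ramp plus a thin bright line (fine detail).
    clean = np.tile(np.linspace(0, 255, 128), (128, 1)).astype(np.uint8)
    clean[64, :] = 255

    noisy = add_impulse_noise(clean, p=0.2)
    restored = median_filter(noisy, size=3)

    def mse(a, b):
        return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

    print(f"MSE noisy vs clean:    {mse(noisy, clean):.1f}")
    print(f"MSE restored vs clean: {mse(restored, clean):.1f}")
    # The median filter removes most impulses but also erases the thin line,
    # illustrating the noise-suppression / detail-preservation conflict.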

In the last few years, there has been growing research interest in the application of computational intelligence techniques, such as neural networks and fuzzy systems, to problems in digital image processing. Indeed, neuro-fuzzy (NF) systems combine the ability of neural networks to learn from examples with the capability of fuzzy systems to model the uncertainty that is inevitably encountered in noisy digital images. Therefore, neuro-fuzzy systems may be utilized to design line-, edge- and detail-preserving filtering operators for processing noisy digital images.

In this tutorial, we will begin with a quick review of the fundamental concepts of fuzzy and neuro-fuzzy systems as well as their application to digital image data. Then, we will derive a generalized neuro-fuzzy (NF) based operator suitable for a range of different applications in image processing. Specifically, we will consider three different applications of the presented NF operator: (1) noise filter, (2) noise detector and (3) edge extractor.

In the noise filter application, the NF operator will be employed as a detail-preserving noise filtering operator to restore digital images corrupted by impulse noise without degrading fine details and texture in the image. In the noise detector application, the NF operator will be employed as an intelligent decision maker and utilized to detect impulses in images corrupted by impulse noise. Here, the NF operator will be used to guide a noise filter so that the filter restores only the pixels detected by the NF operator as impulses and leaves the other pixels (i.e. the uncorrupted pixels) unchanged. Consequently, the NF operator will help reduce the undesirable distortion effects of the noise filter. In the edge extractor application, the NF operator will be used to extract edges from digital images corrupted by impulse noise without requiring pre-filtering of the image by an impulse noise filter.
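
The detector-guided filtering idea can be sketched as follows; note that the simple median-deviation rule below is only an illustrative stand-in for the trained NF impulse detector, not the operator described in the tutorial.

    import numpy as np
    from scipy.ndimage import median_filter

    def switching_median_filter(image, threshold=60):
        """Restore only pixels flagged as impulses; leave all others unchanged.

        The detector here is a simple rule (large deviation from the local
        median), standing in for the trained NF impulse detector."""
        med = median_filter(image, size=3)
        deviation = np.abs(image.astype(float) - med.astype(float))
        corrupted = deviation > threshold      # impulse map from the detector
        restored = image.copy()
        restored[corrupted] = med[corrupted]   # filter only the flagged pixels
        return restored

    # Example on a noisy uint8 image `noisy` (e.g. from the previous sketch):
    # restored = switching_median_filter(noisy)

Because uncorrupted pixels pass through untouched, the distortion introduced by the filter is confined to the pixels flagged as impulses.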

In all of these applications, the same NF operator will be used for three different purposes. The fundamental building block of the NF operator to be presented is a simple 3-input, 1-output NF subsystem. We will then show that highly efficient noise filtering, noise detection or edge extraction operators may easily be constructed by combining a desired number of these simple NF subsystems within a suitable network structure. Following this, we will present a simple approach for training the NF operator for its particular target application. Specifically, we will show that the internal parameters of the NF subsystems in the structure of the presented NF operator may be adaptively optimized by training, and that the same NF operator may be trained as a noise filter, a noise detector or an edge extractor depending only on the choice of the training images. We will further show that the NF subsystems may be trained by using simple artificial training images that can easily be generated on a computer. For each of the three applications of the presented NF operator, we will demonstrate the efficiency of the presented approach through appropriately designed simulation experiments and also compare its performance with a number of selected operators from the literature. We will complete the tutorial with a brief summary of other existing and potential applications of the presented general-purpose NF operator in image processing.
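
As a rough illustration of the kind of building block described above, the following sketch implements a 3-input, 1-output zero-order Sugeno-type neuro-fuzzy unit with Gaussian membership functions and trains only its consequent constants by gradient descent; the tutorial's actual subsystem, network structure and training procedure may well differ.

    import numpy as np

    rng = np.random.default_rng(1)

    class NFSubsystem:
        """Minimal 3-input, 1-output zero-order Sugeno neuro-fuzzy unit:
        Gaussian membership functions, product inference, weighted-average
        defuzzification. A didactic stand-in for the tutorial's NF block."""

        def __init__(self, n_rules=8):
            self.centers = rng.uniform(0.0, 1.0, size=(n_rules, 3))
            self.sigmas = np.full((n_rules, 3), 0.3)
            self.consequents = rng.normal(0.0, 0.1, size=n_rules)

        def firing(self, x):
            # x: (batch, 3) in [0, 1]; returns normalized firing strengths.
            d = (x[:, None, :] - self.centers[None, :, :]) / self.sigmas
            w = np.exp(-0.5 * np.sum(d ** 2, axis=2))       # (batch, rules)
            return w / (np.sum(w, axis=1, keepdims=True) + 1e-12)

        def forward(self, x):
            return self.firing(x) @ self.consequents

        def train(self, x, y, lr=0.5, epochs=200):
            # Gradient descent on the consequent constants (MSE loss).
            for _ in range(epochs):
                wn = self.firing(x)
                err = wn @ self.consequents - y
                self.consequents -= lr * (wn.T @ err) / len(y)

    # Toy target: output the median of the three inputs (a crude denoising
    # rule); the real training would use artificial training images instead.
    x = rng.uniform(0.0, 1.0, size=(2000, 3))
    y = np.median(x, axis=1)
    nf = NFSubsystem()
    nf.train(x, y)
    print("mean absolute error:", np.mean(np.abs(nf.forward(x) - y)))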

INTENDED AUDIENCE
Senior researchers and students who have some general background knowledge in signal processing and communications, and who are interested in computational intelligence techniques.

GOALS

Allow the audience to

  • understand the basic principles of computational intelligence methodologies, their advantages and disadvantages,
  • learn how to design and implement a general purpose neuro-fuzzy operator suitable for many different kinds of signal/image processing tasks,
  • learn how to customize this neuro-fuzzy operator for a specific signal/image processing task by training,
  • understand the other potential uses of computational intelligence based operators.

Bio: Mehmet Emin YUKSEL received his B.S. degree in electronics and communications engineering from Istanbul Technical University, Istanbul, Turkey, in July 1990. In February 1991, he joined the Dept. of Electrical and Electronics Eng., Erciyes University, Kayseri, Turkey. He received his M.S. and Ph.D. degrees in electronics engineering from Erciyes University in February 1993 and September 1996, respectively. Between 1991 and 2011, Dr. Yuksel was a full-time member of the Dept. of Electrical and Electronics Eng., Erciyes University, Kayseri, Turkey, with the exception of the period between March 1995 and December 1995, when he was with the Signal Processing Section, Dept. of Electrical Engineering, Imperial College, London, UK. Currently, he is a professor at the Dept. of Biomedical Engineering, Erciyes University, Kayseri, Turkey. His general research interests include computational intelligence techniques, evolutionary computation and the applications of these techniques in signal and image processing. Dr. Yuksel was conference co-chair of IEEE SIU-2005 (IEEE 13th Signal Processing and Communication Applications Conference), local co-chair of HDM-2008 (International Conference on Multivariate Statistical Modeling and High Dimensional Data Mining), conference co-chair of INISTA-2010 (International Symposium on Innovations in Intelligent Systems and Applications), and local chair of IC-SMHD-2016 (International Conference on Information Complexity and Statistical Modeling in High Dimensions with Applications). He is a member of the Editorial Board of the International Journal of Reasoning-Based Intelligent Systems. He has served as a member of the technical committees of many national and international conferences. He is a Senior Member of the IEEE.


Tutorial 02

Title of the Tutorial: Text Mining and Biomedical Text Data Mining: Entity and Relation Extraction

Dr. Jeyakumar Natarajan, Data Mining and Text Mining Lab., Dept. of Bioinformatics, Bharathiar University, India

Abstract: This tutorial covers text mining in general, and biomedical text data mining in particular, focusing on the extraction of named entities and relations from natural language text. The discipline of text mining evolved for the automatic extraction of new knowledge from published literature. Text mining is defined as the utilization of automated methods to exploit the enormous amount of knowledge available in text documents. In the biomedical sciences, besides experimental data, there is a substantial amount of biomedical knowledge recorded only as free text in research abstracts, full-text articles, clinical records, etc. Machine learning algorithms are commonly applied in text mining applications. Text mining of the biomedical literature has been applied successfully to various biological problems such as biomedical named entity recognition (e.g. gene and protein names), entity relation extraction (e.g. protein-protein interactions, gene-disease relations) and event extraction (e.g. biomedical pathways and functions). The talk will introduce text mining basics and methodology, followed by various application areas in the biomedical domain.

Outline including a short summary of every section:

  • Introduction and overview of machine learning, text mining and biomedical text mining [45 minutes]

This section first introduces machine learning and its application in general text mining. This is followed by the application of text mining to the biomedical literature; biomedical literature resources such as PubMed, full-text research articles and clinical records will also be presented.

  • Text Mining and Biomedical Text Data Mining and Components Tasks [45 minutes]

This section presents the various component tasks of text mining: i) named entity recognition, ii) co-reference resolution, iii) template relation extraction, iv) event extraction and v) scenario template extraction. The two major component tasks of biomedical text mining, i.e. biomedical named entity extraction and entity-relation extraction, and their applications will be highlighted.

  • Biomedical Named Entity Extraction [30 minutes]

The presentation in this section includes an overview of biomedical named entities (e.g. gene, protein and disease names), protein/gene name identification methods such as rule-based, lexicon-based and machine-learning-based approaches, and gold-standard data sets and essential papers related to this task.
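
As a toy illustration of the lexicon-based approach mentioned above, the following Python sketch matches a hand-made gene lexicon against a sentence; real systems draw on curated resources (e.g. UniProt or Entrez Gene) and add tokenization, normalization and disambiguation.

    import re

    # Purely illustrative lexicon; real taggers use curated resources.
    GENE_LEXICON = {"BRCA1", "TP53", "EGFR", "IL-6"}

    def tag_gene_names(sentence):
        """Return (entity, start, end) spans for lexicon hits in a sentence."""
        hits = []
        for token in GENE_LEXICON:
            for m in re.finditer(re.escape(token), sentence):
                hits.append((token, m.start(), m.end()))
        return sorted(hits, key=lambda h: h[1])

    print(tag_gene_names("Mutations in BRCA1 and TP53 are linked to cancer."))
    # [('BRCA1', 13, 18), ('TP53', 23, 27)]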

  • Biomedical Relation and Event Extraction [30 minutes]

This section presents an overview of biomedical relations (protein-protein interactions, gene-disease relations, etc.), relation extraction methods such as rule-based, lexicon-based and machine-learning-based approaches, and gold-standard data sets and essential papers for the relation extraction task.
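
A rule-based relation extractor can be caricatured in a few lines, as below; the verb list and the flanking-entity heuristic are purely illustrative assumptions, far simpler than the methods surveyed in this section.

    INTERACTION_VERBS = {"binds", "activates", "inhibits", "phosphorylates"}

    def extract_relations(sentence, entities):
        """Naive pattern-based relation extraction: report (A, verb, B) when
        two recognized entities flank an interaction verb in one sentence.
        Real systems add parsing, ML classifiers and negation handling."""
        tokens = sentence.rstrip(".").split()
        relations = []
        for i, tok in enumerate(tokens):
            if tok.lower() in INTERACTION_VERBS:
                left = [t for t in tokens[:i] if t in entities]
                right = [t for t in tokens[i + 1:] if t in entities]
                relations += [(a, tok.lower(), b) for a in left for b in right]
        return relations

    print(extract_relations("MDM2 inhibits TP53 in many tumours.",
                            entities={"MDM2", "TP53"}))
    # [('MDM2', 'inhibits', 'TP53')]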

  • Demo of in-house developed text mining tools [30 minutes]

In this last section, an on-line demo of the following four in-house developed text mining tools will be presented: i) the named entity tagger NAGGNER, ii) the co-reference tagger ProNormz, iii) the entity-relation system PPInterFinder and iv) the event extraction system HPIminer, each with its underlying algorithm/method.

Target audience: The target audience is computer science and information technology graduate students and researchers who are interested in understanding the basic principles behind text mining and wish to develop and use text mining systems for biomedical data analysis. All concepts will be introduced at an intuitive level, so both computational biologists and computer scientists will be comfortable with the material.

Specific goals and objectives: The specific goal and objective of this tutorial is to introduce computer science researchers to the plethora of data, in text and other forms, available in biomedical science and to the current and future research opportunities existing in this domain.

Bio: Jeyakumar Natarajan is currently working as Professor at the Dept. of Bioinformatics, Bharathiar University, Coimbatore, India. His Ph.D. is in Computational Biology from the University of Ulster, United Kingdom, where he worked on developing data mining and text mining systems for protein-protein interactions and robust analysis of microarray data. He also did postdoctoral work at Northwestern University Medical School, Northwestern University, Chicago, US. His research area is the intersection of computer science, biology, and computational linguistics. His current research activities focus on data mining, text mining and machine learning methods for the analysis and interpretation of biomedical data and other high-throughput data from genomics and proteomics. His other research interests include information retrieval, web mining, bio-ontologies, and ontology mining in bioinformatics. Jeyakumar is a frequent invited speaker on the above topics at various universities and research institutions across India.


Tutorial 03

Title of the Tutorial: Software Quality Predictive Modeling: An Effective Assessment of Experimental Data

Dr. Ruchika Malhotra, Assistant Professor, Department of Software Engineering, Delhi Technological University, Delhi, India

Abstract: Predictive modeling, in the context of software engineering, relates to the construction of models for the estimation of software quality attributes such as defect-proneness, maintainability and effort, amongst others. For developing such models, software metrics act as predictor variables, as they signify various design characteristics of a software system such as coupling, cohesion, inheritance and polymorphism. A number of techniques, both statistical and machine learning, are available for developing predictive models. Hence, conducting successful empirical studies which effectively use these techniques is important in order to develop models which are practical and useful. These models are useful to organizations for the prioritization of constrained resources, effort allocation and the development of an effective, high-quality software product.
However, conducting effective empirical studies which develop successful predictive models is not possible if a proper research methodology and steps are not followed. This tutorial introduces a successful stepwise procedure for the efficient application of various techniques to predictive modeling. A number of research issues which are important to address while conducting empirical studies, such as data collection, the validation method, the use of statistical tests and the use of an effective performance evaluator, are also discussed with the help of an example. The tutorial also provides future directions in the field of software quality predictive modeling.
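
For illustration, the following sketch builds a defect-proneness model on synthetic stand-ins for common OO metrics (CBO, LCOM, DIT, WMC) using logistic regression with 10-fold cross-validation and AUC as the performance evaluator; the data-generating rule is invented for the example and is not drawn from any real project.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a per-class OO metrics data set; real studies
    # would use metrics collected from an actual software system.
    n = 500
    cbo = rng.poisson(8, n).astype(float)       # coupling between objects
    lcom = rng.uniform(0, 1, n)                 # lack of cohesion in methods
    dit = rng.integers(1, 6, n).astype(float)   # depth of inheritance tree
    wmc = rng.poisson(12, n).astype(float)      # weighted methods per class
    X = np.column_stack([cbo, lcom, dit, wmc])

    # Invented rule: defect-proneness rises with coupling and low cohesion.
    p = 1.0 / (1.0 + np.exp(-(0.3 * (cbo - 8) + 2.0 * (lcom - 0.5))))
    y = rng.random(n) < p

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # 10-fold cross-validation with AUC, a threshold-independent evaluator.
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {scores.mean():.3f} +/- {scores.std():.3f}")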

Outline: A major problem faced by software project managers is developing good-quality software products within tight schedules and budget constraints. The development of predictive models which can estimate various software quality attributes such as effort, change-proneness, defect-proneness and maintainability is important for project managers so that they can focus their resources effectively and develop a software product of the desired quality. But how do we use the different available techniques, such as statistical and machine learning, effectively for model prediction? In order to answer this question, this tutorial discusses, with the help of an example, the research methodology for the successful application of various techniques to software quality predictive modeling. It also explores various research issues in the field and provides future directions to enhance the use of software quality predictive models. The various sections of this tutorial are:

  • Research Methodology for Software Quality Predictive Modeling
  • Research Issues in Software Quality Predictive Modeling
  • Current Trends in Software Quality Predictive Modeling and
  • Future Directions in Software Quality Predictive Modeling

Target Audience: The tutorial is targeted at academic researchers and software practitioners who plan to develop models for predicting various software quality attributes. It states the steps needed to perform an empirical study that investigates and empirically validates the relationship between various software quality attributes and OO metrics, and it proposes efficient steps for conducting replicated studies or analyzing the relationship between various quality attributes and OO metrics.

Specific Goals and Objectives: The reasons for the relevance of this tutorial are manifold. Empirical validation of OO metrics is a critical research area in the present-day scenario, with a large number of academicians and research practitioners working in this direction to predict software quality attributes in the early phases of software development. Thus, this tutorial explores the various steps involved in the development of an effective software quality predictive model using a modeling technique with an example data set. Performing successful empirical studies in software engineering is important for the following reasons:

  • To identify defective classes in the initial phases of software development so that more resources can be allocated to these classes to remove errors; the cost of correcting an error is thus minimized, as it is eliminated at an earlier stage.
  • To analyze the metrics which are important for predicting software quality attributes and to use them as quality benchmarks so that the software process can be standardized and delivers effective products.
  • To efficiently plan testing, walkthroughs, reviews and inspection activities so that limited resources can be properly planned to provide good quality software.
  • To use and adapt different techniques (statistical, machine learning & search-based) in predicting software quality attributes.
  • To analyze existing trends for software quality predictive modeling and suggest future directions for researchers.
  • To work towards consistently improving the quality of the resulting OO software processes and products.
  • To document the research methodology so that effective replicated studies can be performed with ease.

It is important to document and state effective research methodology for use of different techniques for software quality predictive modeling so that efficient empirical studies can be performed which are of practical relevance. Thus, the tutorial presents a complete and repeatable research methodology.

Bio: Ruchika Malhotra is an assistant professor in the Department of Software Engineering, Delhi Technological University (formerly Delhi College of Engineering), Delhi, India. She was awarded the prestigious UGC Raman Postdoctoral Fellowship by the Indian government for pursuing postdoctoral research at the Department of Computer and Information Science, Indiana University-Purdue University Indianapolis (2014-15), Indianapolis, Indiana, USA. She received her master's and doctorate degrees in software engineering from the University School of Information Technology, Guru Gobind Singh Indraprastha University, Delhi, India, where she was also an assistant professor. She received the prestigious IBM Faculty Award 2013. She is the author of the book Empirical Research in Software Engineering: Concepts, Analysis and Applications and co-author of the book Object Oriented Software Engineering. Her research interests are in empirical research in software engineering, improving software quality, statistical and adaptive prediction models, software metrics and software testing. Her H-index as reported by Google Scholar is 18. She has published more than 120 research papers in international journals and conferences. She can be contacted by e-mail at ruchikamalhotra2004@yahoo.com.


Tutorial 04

Title of the Tutorial: Are you safe on your browsers? Cyber attacks and spying using malicious browser extensions

Mr. Gaurav Varshney, Research Scholar, Information Security Lab, Department of CS, IIT Roorkee

Abstract: Browser extensions are now used extensively to provide additional functionality on top of basic browser features. Recently, it has been identified that malicious browser extensions allow attackers to carry out cyber fraud and cyber spying on targeted users. This tutorial practically demonstrates the vulnerabilities that are exploited by malicious extensions and the possible attacks that can be launched by attackers. The tutorial will provide browser developers and security researchers with insight into current security vulnerabilities so that they can be patched with improved designs in the near future to prevent malicious-extension-based attacks.

Outline including a short summary of every section:
1. Introduction of browser extensions
2. Chrome browser extension execution architecture as case study
3. Cyber frauds via malicious extensions (Practical)
    a) Phishing  b) Affiliate Fraud   c) Webpage Manipulation
4. Cyber spying via malicious extensions (Practical)
    a) Sniffing users' email data  b) Sniffing users' form data    c) Key loggers in the browser
5. Botnet based attacks via malicious extensions
        Using malicious extensions as a bot for launching DDoS attacks
6. Discussion about the security flaws and recent research proposals
7. Identifying research gaps and suggesting future research directions

Intended audience: Security researchers from both industry and academia; B.Tech, M.Tech and PhD students interested in research in the area of cyber security; and people working in cyber forensics.

 
Specific goals and objectives: Showcase the current practices used by fraudsters to commit cyber fraud and cyber spying with the help of malicious browser extensions.

Bio: Mr. Gaurav Varshney is a PhD student at IIT Roorkee. He is currently a Visiting Scholar (with Prof. Pradeep Atrey) at the Dept. of Computer Science, SUNY Albany. His research interests include securing users against cyber spying and developing advanced authentication schemes for thwarting phishing attacks. He completed his Master's at IIT Roorkee in 2012 in the area of phishing prevention schemes. Gaurav has also worked with Qualcomm, India (2012-14) as an Engineer and has done internships at TRDDC Labs, Pune, India and Pure Testing Private Limited, Noida.


Tutorial 05

Title of the Tutorial: Feature Based Image Segmentation and Classification Techniques using Random Forests

Dr. Kumar Rajamani, Architect, Robert Bosch, Bangalore 

Abstract: This talk presents some of the recent approaches for image classification and segmentation. Segmentation tasks are very challenging, especially in the medical imaging context. Recent advances in feature extraction and classification make some of these challenging problems tractable. First, a brief overview of some recent feature extraction techniques is presented. This is followed by insights into the Random Forest classifier. Finally, an interactive training application, ilastik, is explained. ilastik provides real-time feedback on the current classifier predictions and thus allows for targeted training and an overall reduced labeling time. In addition, an uncertainty measure can guide the user to ambiguous regions of the data. Once the classifier has been trained on a representative subset of the data, it can be exported and used to automatically process a very large number of images.
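
The pipeline described above can be sketched roughly as follows (assuming NumPy, SciPy and scikit-learn): multi-scale filter responses serve as per-pixel features, a random forest is trained on a sparse set of labeled pixels standing in for interactive brush strokes, and per-pixel class probabilities provide the kind of uncertainty feedback ilastik exposes. This is a simplified illustration, not ilastik's actual implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic image: a bright disc ("object") on a dark noisy background.
    yy, xx = np.mgrid[0:128, 0:128]
    disc = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
    image = disc.astype(float) + rng.normal(0, 0.3, disc.shape)

    def pixel_features(img, scales=(1.0, 2.0, 4.0)):
        """Per-pixel feature stack, similar in spirit to ilastik's filter
        bank: multi-scale Gaussian smoothing and gradient magnitude."""
        feats = [gaussian_filter(img, s) for s in scales]
        feats += [gaussian_gradient_magnitude(img, s) for s in scales]
        return np.stack(feats, axis=-1).reshape(-1, 2 * len(scales))

    features = pixel_features(image)
    labels = disc.ravel()

    # Sparse training set: a random pixel subset stands in for the user's
    # interactive brush strokes.
    train_idx = rng.choice(features.shape[0], size=500, replace=False)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[train_idx], labels[train_idx])

    # Per-pixel class probabilities double as an uncertainty map that can
    # guide further labeling; thresholding yields the segmentation.
    proba = clf.predict_proba(features)[:, 1].reshape(image.shape)
    segmentation = proba > 0.5
    print("pixel accuracy:", (segmentation.ravel() == labels).mean())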

Bio: Dr. Kumar Rajamani is an Architect at Robert Bosch Engineering and Business Solutions. Prior to joining Bosch, he spent three years at GE Global Research (GRC) with the Medical Image Analysis lab, where he was primarily involved in quantitative imaging of cancer. Before GE, Kumar was a Senior Scientist at Philips Research and Head of the IT Department at Amrita University. His research focus includes medical image analysis and health-care technologies for emerging markets. He has seven patent filings. Kumar completed his Ph.D. in Biomedical Engineering at the University of Bern, Switzerland.

