

Protocols for Internet of Things

Dr. Mukesh Taneja, Cisco Systems, Bangalore, India

IEEE 802.11ah, an Enabling Technology for the Internet of Things. How Does It Work?

Dr. Evgeny Khorov,  Senior Researcher, MIPT and IITP RAS, Russia

QoS and QoE in the Next Generation Networks and Wireless Networks

Dr. Pascal Lorenz, University of Haute Alsace, France

Implementing a Private Cloud Environment with the use of OpenNebula and VirtualBox

Ms. Sanchika Gupta, IIT Roorkee, India
Mr. Gaurav Varshney,  Qualcomm India

NoSQL Databases

Mr. G C Deka, Ministry of Labour & Employment Govt. of India

Design Automation for Quantum Computing Circuits

Dr. Amlan Chakrabarti, University of Calcutta, India

A Methodology for Architecture-Based Software Reliability Analysis

Dr. Veena Mendiratta, Bell Labs at Alcatel-Lucent in Naperville, Illinois, USA
Dr. Swapna S. Gokhale, University of Connecticut, USA


Watermarking techniques for scalable coded image and video authentication

Dr. Deepayan Bhowmik, Heriot-Watt University, Edinburgh, UK
Dr. Arijit Sur, Indian Institute of Technology, Guwahati, India

Design-for-testability automation of analog and mixed-signal integrated circuits

Dr. Sergey Mosin, Vladimir State University, Vladimir, Russia

Systems Safety, Security & Sustainability

Prof. Ali Hessami,  Vega Systems, UK

High Efficiency Video Coding

Mr. Shailesh Ramamurthy, Arris India

Network Security and Beyond: Network Anomaly Detection in the Field

Dr. Christian Callegari, University of Pisa, Italy

Web Application Security

Manu Zacharia, C|EH, C|HFI, CCNA, MCP, Certified ISO 27001-2005 Lead Auditor, MVP-Enterprise Security(2009-2012), ISLA-2010 (ISC)2


Protocols for Internet of Things
Dr. Mukesh Taneja
Cisco Systems, Bangalore, India
Length of the tutorial: 2-3 hours

Abstract: More things than people are now connecting to the Internet: over 12.5 billion devices were connected in 2010 alone, and 50 billion devices are expected to be connected by 2020. Yet today, more than 99 percent of things in the physical world remain unconnected. How will having lots of things connected change everything? The growth and convergence of processes, data, and things on the Internet will make networked connections more relevant and valuable than ever before, creating unprecedented opportunities for industries, businesses, and people. The Internet of Things (IoT) is the next technology transition, in which devices will allow us to sense and control the physical world. It is also part of something even bigger: the Internet of Everything (IoE), the networked connection of people, process, data, and things. Its benefit is derived from the compound impact of these connections and the value created as "everything" comes online. IoT solutions on devices, gateways and infrastructure nodes include the following: a connectivity layer (such as that provided by networks using IEEE 802.15.4, LTE/3G/2G, WiFi, Ethernet, RS485, Power Line Communication and IP-based protocols), a service layer (middleware such as that being specified by oneM2M) and an application layer.

This tutorial provides an overview of several IoT technology components for consumer as well as industrial IoT segments. It starts with an overview of some of the application layer protocols such as CoAP, XMPP, Modbus and DNP3. For IoT solutions built using IEEE 802.15.4-type mesh networks, it explains certain mechanisms of the 802.15.4 MAC including the TSCH (Time Slotted Channel Hopping) mode, 6LoWPAN (IPv6 over low power Wireless Personal Area Networks), ROLL (routing over low power and lossy networks) and security. Next, it provides an overview of IoT activities related to WiFi, IEEE WAVE (Wireless Access in Vehicular Environments) and 3GPP. Big Data and Analytics related work is also considered. In the end, some of the research challenges are highlighted.
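
To make the CoAP discussion concrete, here is a minimal sketch (not part of the tutorial material) that packs the fixed 4-byte CoAP header defined in RFC 7252; the function name and example values are illustrative assumptions.

```python
def coap_header(version, mtype, token, code_class, code_detail, message_id):
    """Build the 4-byte CoAP fixed header (RFC 7252) followed by the token."""
    tkl = len(token)
    assert 0 <= tkl <= 8, "token length field is 4 bits; tokens are at most 8 bytes"
    byte0 = (version << 6) | (mtype << 4) | tkl   # Ver | Type | TKL
    code = (code_class << 5) | code_detail        # e.g. 0.01 = GET
    return bytes([byte0, code,
                  (message_id >> 8) & 0xFF, message_id & 0xFF]) + token

# A confirmable (type 0) GET request (code 0.01) with an empty token:
hdr = coap_header(1, 0, b"", 0, 1, 0x1234)
```

The compactness of this header (4 bytes versus dozens for HTTP) is one reason CoAP suits constrained IoT devices.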


  • IoT: Introduction
    • Definition of IoT / IoE and some use cases
    • High level architecture
  • IoT: Application Layer Protocols
    • CoAP, XMPP, Modbus and DNP3
  • IoT: Networking related topics
    • IEEE 802.15.4, TSCH, ROLL, 6LoWPAN, Security
    • IEEE Wireless Access in Vehicular Networks
    • 3GPP and WiFi Enhancements for IoT
    • Power Line Communication
    • Deterministic Ethernet
  • IoT Middleware
  • Big Data and Analytics
  • Research Challenges

Bio: Dr. Mukesh works as a Principal Engineer with Cisco India. He has been working in the areas of Wireless Systems, Internet of Things and Analytics. During his 17+ years in industry, he has worked on commercial LTE/3G/WiFi/IP based wireless products, led technology incubation projects and participated in (3GPP, oneM2M and IEEE802 - wireless) standardization work. Mukesh got his PhD from University of California San Diego in 1998, ME from IISc Bangalore in 1993 and BE from BITS, Pilani in 1989. He has also completed an Executive General Management Program from IIM, Bangalore.


IEEE 802.11ah, an Enabling Technology for the Internet of Things. How Does It Work?


Dr. Evgeny Khorov
Senior Researcher, MIPT and IITP RAS, Russia

Abstract: Smart technologies play a key role in sustainable economic growth. They transform houses, offices, factories, and even cities into autonomic, self-controlled systems that often act without human intervention, sparing people the routine work of collecting and processing information. Some analysts forecast that by 2020 the total number of smart devices connected together in a network, called the Internet of Things (IoT), will reach 50 billion. Apparently, the best way to connect such a huge number of devices is wirelessly. Unfortunately, state-of-the-art wireless technologies cannot provide connectivity for such a huge number of devices, most of which are battery-powered. 3GPP, IEEE and other international organizations are currently adapting their standards to the emerging IoT market. For example, the IEEE 802 LAN/MAN Standards Committee (LMSC) has formed the IEEE 802.11ah Task Group (TGah) to extend the applicability of IEEE 802.11 networks by designing an energy-efficient protocol that allows thousands of indoor and outdoor devices to operate in the same area. In this tutorial, we will focus on the very promising changes introduced by TGah and adopted in November 2013 as the first draft standard of the Low Power Wi-Fi (IEEE 802.11ah) technology. From the tutorial, you will learn how IEEE 802.11ah operates. We will also discuss some research challenges in this area.
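
One of the 802.11ah mechanisms that makes thousands of contending stations manageable is the Restricted Access Window (RAW), which spreads stations over time slots so only a small group contends at once. The sketch below illustrates the idea with a simple AID-modulo-slot-count mapping; this is a simplification for intuition, not the exact mapping function of the draft standard.

```python
def raw_slot(aid, n_slots, offset=0):
    """Return the RAW slot index assigned to a station with this AID
    (simplified round-robin mapping, not the standardized formula)."""
    return (aid + offset) % n_slots

def group_stations(aids, n_slots, offset=0):
    """Partition station AIDs into their RAW slots."""
    slots = {s: [] for s in range(n_slots)}
    for aid in aids:
        slots[raw_slot(aid, n_slots, offset)].append(aid)
    return slots

# Eight stations spread over four slots: only two contend per slot.
groups = group_stations(range(1, 9), n_slots=4)
```

With N stations and K slots, the expected contention per slot drops from N to roughly N/K, which is the energy-saving point of the mechanism.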

Bio: Dr. Evgeny Khorov is a Senior Researcher in the Network Protocols Research Lab, Institute for Information Transmission Problems, Russian Academy of Sciences. In parallel, he lectures on Wireless Networking Protocols and Mathematical Modeling of Wireless Networks at the Moscow Institute of Physics and Technology, a leading Russian university. His current research interests include the Internet of Things, channel access, QoS provisioning, multi-hop wireless networks, and performance evaluation. He has developed several mathematical models of networking protocols and is a co-author of routing protocols developed for scalable mesh and military networks. He has been involved in several national and international research projects and participates in the IEEE 802 LMSC standardization activities. Evgeny Khorov has authored more than 30 research papers, several of which have received awards at international conferences (e.g., the Best Paper Award at IEEE ISWCS 2012, Paris). He was also awarded the 2013 Moscow Prize in the field of Telecommunications for the study of channel access methods in multi-hop wireless networks. He is Executive Chair of WiFlex 2013 and Co-chair of ITaS 2014, and serves as a reviewer for highly reputed scientific journals.


QoS and QoE in the Next Generation Networks and Wireless Networks

Dr. Pascal Lorenz

University of Haute Alsace, France

Abstract: Emerging Internet Quality of Service (QoS) mechanisms are expected to enable widespread use of real-time services such as VoIP and videoconferencing. Quality of Experience (QoE) is a subjective measure of a customer's experience with a service. "Best effort" Internet delivery cannot be used for the new multimedia applications; new technologies and new standards are necessary to offer QoS/QoE for them. Therefore, new communication architectures integrate mechanisms that allow guaranteed QoS/QoE services as well as high-rate communications. The emerging Internet QoS architectures, differentiated services and integrated services, do not consider user mobility. QoS mechanisms enforce a differentiated sharing of bandwidth among services and users. Thus, there must be mechanisms available to identify traffic flows with different QoS parameters, and to make it possible to charge the users based on the requested quality. The integration of fixed and mobile wireless access into IP networks presents a cost-effective and efficient way to provide seamless end-to-end connectivity and ubiquitous access in a market where the demand for mobile Internet services has grown rapidly and is predicted to generate billions of dollars in revenue.

This tutorial covers the issues of QoS provisioning in heterogeneous networks and Internet access over future wireless networks, as well as the ATM, MPLS, DiffServ and IntServ frameworks. It discusses the characteristics of the Internet, mobility and QoS provisioning in wireless and mobile IP networks. The tutorial also covers routing, security, the baseline architecture of the inter-networking protocols, end-to-end traffic management issues, and QoS for mobile/ubiquitous/pervasive computing users.
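
The differentiated bandwidth sharing mentioned above is classically enforced with a token bucket. As an illustrative sketch (not taken from the tutorial; the rate and burst values are arbitrary examples), a policer admits a packet only if enough tokens have accumulated:

```python
class TokenBucket:
    """Token-bucket traffic policer: rate in bytes/s, capacity = burst size."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0

    def allow(self, packet_size, now):
        # Refill according to elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True           # conforming packet
        return False              # non-conforming: drop or remark

tb = TokenBucket(rate=1000, capacity=1500)   # 1000 B/s with a 1500 B burst
```

DiffServ meters and IntServ traffic specifications (TSpecs) are both built around exactly this rate/burst pair.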


  • Concepts of the QoS/QoE
    • Traffic mechanisms, congestion
    • Generations of Internet
  • Mechanisms and architectures for QoS
  • ATM networks (IP over ATM, WATM)
  • New communication architectures
  • Mechanisms allowing QoS
    • MPLS
    • DiffServ
    • IntServ
  • QoS in Wireless Networks
    • Mobile Internet applications
    • Quality for Mobile/Ubiquitous/Pervasive Computing users in gaining network access and satisfying their service requirements
    • Mobile, satellites and personal communications
    • Mobile and wireless standardization: IEEE 802.11, IEEE 802.16, IEEE 802.20
    • WLL, WPAN

Bio: Pascal Lorenz received his M.Sc. (1990) and Ph.D. (1994) from the University of Nancy, France. Between 1990 and 1995 he was a research engineer at WorldFIP Europe and at Alcatel-Alsthom. He is a professor at the University of Haute-Alsace, France, since 1995. His research interests include QoS, wireless networks and high-speed networks. He is the author/co-author of 3 books, 3 patents and 200 international publications in refereed journals and conferences. He was Technical Editor of the IEEE Communications Magazine Editorial Board (2000-2006), Chair of Vertical Issues in Communication Systems Technical Committee Cluster (2008-2009), Chair of the Communications Systems Integration and Modeling Technical Committee (2003-2009) and Chair of the Communications Software Technical Committee (2008-2010). He has served as Co-Program Chair of IEEE WCNC'2012, ICC'2004 and ICC'2017, tutorial chair of VTC'2013 Spring and WCNC'2010, track chair of PIMRC'2012, symposium Co-Chair at Globecom 2011-2007, ICC 2010-2008 and ICC'2014. He has served as Co-Guest Editor for special issues of IEEE Communications Magazine, Networks Magazine, Wireless Communications Magazine, Telecommunications Systems and LNCS. He is senior member of the IEEE, IARIA fellow and member of many international program committees. He has organized many conferences, chaired several technical sessions and gave tutorials at major international conferences. He was IEEE ComSoc Distinguished Lecturer Tour during 2013-2014.


Implementing a Private Cloud Environment with the use of OpenNebula and VirtualBox

Ms. Sanchika Gupta
IIT Roorkee, India

Mr. Gaurav Varshney
Qualcomm India

Length of the tutorial: Half-day

The tutorial will give a detailed description of the Cloud and its services, and will provide a practical demonstration of how a private Cloud can be built with OpenNebula and VirtualBox. Cloud security aspects, including known attacks and vulnerabilities, will also be covered briefly together with existing solutions. At the end of the tutorial, a remote desktop session will let the audience see an existing implementation of a private Cloud. An intrusion detection approach for securing the Cloud against file-integrity violations, malware and DDoS attacks will also be discussed.


  • Introduction to Cloud:
    • What the Cloud is, why it is needed and what services it provides.
    • Architecture of the Cloud and the minimal resource needs for deploying a private Cloud.
  • Introduction to a private Cloud using OpenNebula and VirtualBox:
    • Private Cloud deployment using OpenNebula with the VirtualBox virtualization environment.
    • Design of a private Cloud using OpenNebula and VirtualBox.
  • Implementation details:
    • A step-by-step guide to implementing a private Cloud.
    • Remote session to an implemented private Cloud using OpenNebula and VirtualBox.
  • Discussion of Cloud security aspects:
    • Cloud attacks and analysis.
    • A complete and lightweight intrusion detection approach for the Cloud.

Bio: Sanchika Gupta works on lightweight intrusion detection for Cloud computing environments. She completed her Master's at Thapar University before joining IIT Roorkee as a Research Scholar. She is among the people who have deployed their own private Cloud setup at IIT Roorkee and use it for their research and analysis. She has also written apps, one of which, S|PP|S, is available in the Google Web Store and protects people from online web fraud. She has attended many national and international conferences and has published in several good journals.

Gaurav Varshney is an Engineer at Qualcomm. He received his Master's from IIT Roorkee in the area of information security, specifically phishing prevention schemes. His research interests include email and phishing prevention schemes with a focus on cyber fraud and intelligence, and currently the security of mobile devices. He has publications on phishing prevention schemes, threat modeling and network analysis in leading conferences and journals.


NoSQL Databases

Mr. G C Deka, Assistant Director, Ministry of Labour & Employment, Govt. of India

Length of the tutorial: Three Hours

Abstract: Distributed data replication and partitioning are the two fundamental techniques for sustaining the enormous growth in data volume, velocity and value in the cloud. In a traditional database cluster, data must either be replicated across the members of the cluster or partitioned between them. Shipping data manually to distant cloud servers is a time-consuming, risky and expensive process, so the network is the best option for data transfer among distributed and diverse database systems. Relational databases are difficult to provision dynamically and efficiently on demand to meet cloud requirements. NoSQL databases are a new breed of databases designed to overcome the identified limitations and drawbacks of RDBMS. The goal of NoSQL is to provide scalability and availability, and to address the other limitations of RDBMS for cloud computing.

The common motivations of NoSQL design are scalability and failover. In most NoSQL database systems, data is partitioned and replicated across multiple nodes, and most of them use Google's MapReduce, the Hadoop Distributed File System or Hadoop MapReduce for data storage and processing. Cassandra, HBase and MongoDB are the most widely used and can be regarded as representatives of the NoSQL world. The CAP theorem states that a distributed database can be optimized for only 2 of the 3 properties Consistency (C), Availability (A) and Partition tolerance (P), leading to the combinations CA, CP and AP. There are a number of NoSQL databases with different features and functionality. This tutorial discusses 10 popular NoSQL databases under 5 categories for CAP analysis.
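
Many NoSQL stores expose the CAP trade-off through tunable quorums: with N replicas, W write acknowledgements and R read acknowledgements, every read overlaps every write only when R + W > N. The helper below just checks that inequality; it is a simplification of the tunable-consistency semantics found in systems such as Cassandra, offered here only as an illustration.

```python
def quorum_consistent(n, r, w):
    """True if every read quorum must overlap every write quorum (R + W > N)."""
    if not (1 <= r <= n and 1 <= w <= n):
        raise ValueError("quorum sizes must lie between 1 and N")
    return r + w > n

# N=3 with R=2, W=2 guarantees overlap; R=1, W=1 trades consistency
# for lower latency and higher availability.
```

Choosing R and W per operation is how such systems slide between the CP and AP corners of the CAP triangle.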


  • Introduction to cloud computing (Historical Background 10 slides)
  • NoSQL (5 Categories of NoSQL 10-15 slides)
  • Discussion of Cloud Database (with a focus on CAP theorem 5-10 slides)
  • CAP analysis of 10 popular databases (10-20 slides)
  • Practical session on MongoDB


Design Automation for Quantum Computing Circuits

Dr. Amlan Chakrabarti

A. K. Choudhury School of Information Technology, University of Calcutta, India

Length of the tutorial: Half/Full day

Abstract: Harnessing the power of quantum mechanical properties of atomic and sub-atomic particles to perform useful computation creates the new paradigm of quantum computation. The field was initiated by pioneers like Richard Feynman and Charles H. Bennett. Though young, quantum computing has created a lot of excitement among computer scientists due to its power to solve some important computational problems faster than present-day classical machines. Quantum phenomena like superposition, interference and entanglement are the key players enabling quantum machines to outperform classical machines. Quantum algorithms can be applied in a variety of areas, to name a few: systems of linear equations, number theory, database search, physical simulation, chemistry and physics. Quantum algorithms are usually described in the commonly used circuit model of quantum computation, which acts on some input quantum state and terminates with a measurement.

This tutorial will give an overview of quantum computing algorithms and circuits, with a brief insight into design automation for quantum circuit design. The key steps involved in designing a quantum circuit for a given quantum algorithm on different target quantum technologies will be addressed in this lecture.
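
As a tiny, self-contained illustration of the circuit model mentioned above (no quantum SDK assumed), the Hadamard gate can be written as a 2x2 matrix and applied to |0> to create an equal superposition; applying it twice returns the original state, since H is its own inverse:

```python
import math

# Hadamard gate as a 2x2 matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

ket0 = [1.0, 0.0]          # |0>
plus = apply(H, ket0)      # (|0> + |1>) / sqrt(2): equal superposition
back = apply(H, plus)      # H * H = I, so this is |0> again
```

Measuring `plus` would yield 0 or 1 with probability |amplitude|^2 = 1/2 each, which is the superposition property the abstract refers to.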


  1. Introduction
    • Why Quantum Computing?
    • Classical vs. Quantum
    • Key aspects of the power of Quantum Computing
    • QC technologies of today
  2. How to design Quantum Computers
  3. Quantum Logic
    • Basic gates
    • Universal set of gates
    • Reversible logic
    • Quantum gate cost model
    • Circuits for quantum algorithms
  4. Design Automation for Quantum Circuit Synthesis
    • Quantum Algorithm Description (QCL)
    • Quantum Assembly Format (QASM)
    • Logic synthesis
      - Reed-Muller synthesis
      - Multi-controlled Toffoli synthesis
      - Nearest-neighbor synthesis
    • Technology mapping
      - PMD-specific identities
      - PMD-specific fault-tolerant quantum logic synthesis
    • Quantum error correcting codes
    • Layout for quantum circuits

Bio: Dr. Amlan Chakrabarti is at present an Associate Professor and Coordinator at the A. K. Choudhury School of Information Technology, University of Calcutta. He is also the Principal Investigator of the Center of Excellence in Systems Biology and Bio-Medical Engineering, University of Calcutta, and the Coordinator of the Integrated Circuits and System Design Research Facilities of the University of Calcutta. He holds an M.Tech from the University of Calcutta and did his doctoral research on nano-computing and nano-scale VLSI design at the Indian Statistical Institute, Kolkata, during 2004-2008. He was a Post-Doctoral Fellow at the School of Engineering, Princeton University, USA during 2011-2012. He is the recipient of the BOYSCAST fellowship award in Engineering Science from the Department of Science and Technology, Govt. of India, in 2011. He is a Senior Member of the IEEE and has been a reviewer for IEEE Transactions on Computers, IET Computers & Digital Techniques, Elsevier Simulation Modelling Practice and Theory, and the Springer Journal of Electronic Testing: Theory and Applications. He has published around 60 research papers in refereed journals and conferences and has given around 30 invited lectures at international and national venues. His research interests are quantum computing, VLSI design, embedded systems design, and image and video processing algorithms and architectures.


A Methodology for Architecture-Based Software Reliability Analysis

Dr. Veena Mendiratta
Bell Labs at Alcatel-Lucent in Naperville, Illinois, USA

Dr. Swapna S. Gokhale
Dept. of Computer Science and Engineering, University of Connecticut, USA

Abstract: The critical dependence of our society on the services offered by software systems places a heavy premium on their reliability. An important step in achieving high reliability in a software system is systematic reliability analysis at the architectural level. Such analysis should consider customer usage patterns (the operational profile), component reliability, system architecture, and the deployment of the components across hardware hosts. While one of the outcomes of this analysis is a prediction of the system reliability, the more important outcomes are an assessment of the sensitivity of the system reliability to its components' attributes and the identification of components that are critical from a reliability perspective. These components can then be targeted for reliability enhancement, so that the desired system reliability targets can be achieved in a cost-effective manner.

In this tutorial we will first present an overview of the different types of software reliability models, classified by the phase of the development cycle in which the model is developed and used, to set the context for architecture-based reliability models. The main part of the tutorial will focus on a hierarchical, two-tier methodology to analyze the reliability of a software system. The methodology partitions the analysis into two steps. In the first step, the reliability of a service offered by the system is obtained by composing the reliabilities of its components within the context of its architecture. Service reliability will also consider the co-location and deployment configurations of the system components. Service reliability analysis will be conducted by mapping the message flow that occurs among the different components of a system to a Markov model. In developing the service reliability analysis methodology, we draw and build upon our extensive recent work in the area of architecture-based software reliability analysis. In the second step, the system-level reliability is obtained by composing the service reliabilities obtained from the first step in conjunction with the customer usage patterns and service distributions. The methodology thus considers, in an integrated manner, the impact of several diverse aspects that influence system reliability, namely component failures, component interactions, deployment configurations, and customer usage scenarios. Finally, we will discuss how the methodology can be used to allocate reliabilities to system components, based on the expected end-to-end service reliability targets, considering the architectural characteristics of the services.
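
The composition step described above can be sketched numerically. The toy below, in the spirit of Cheung-style architecture-based models, composes component reliabilities over an acyclic control-flow graph: the probability that a service completes from a component is that component's reliability times the flow-weighted completion probabilities of its successors. The three components, their reliabilities and the transition probabilities are invented for illustration; this is not the tutorial's actual model.

```python
def service_reliability(rel, flow, start, exit_state="EXIT"):
    """P(service completes from `start`) = R_start * sum_j p_j * P(complete from j)."""
    if start == exit_state:
        return 1.0
    return rel[start] * sum(p * service_reliability(rel, flow, nxt, exit_state)
                            for nxt, p in flow[start])

# Hypothetical three-component service: a front end that calls the
# auth component 60% of the time and the cache 40% of the time.
rel = {"frontend": 0.99, "auth": 0.98, "cache": 0.95}
flow = {"frontend": [("auth", 0.6), ("cache", 0.4)],
        "auth":     [("EXIT", 1.0)],
        "cache":    [("EXIT", 1.0)]}

r_sys = service_reliability(rel, flow, "frontend")
# 0.99 * (0.6 * 0.98 + 0.4 * 0.95) = 0.95832
```

Perturbing one component's reliability and recomputing `r_sys` gives exactly the sensitivity analysis the tutorial highlights; cyclic architectures need the full Markov (matrix) treatment instead of this recursion.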

Once the reliability budget of the components is determined, the next step is to determine how the target component reliabilities can be achieved. To understand the factors that influence component reliabilities, we partition each component into three sub-components, namely, hardware, middleware and application software. Since the hardware and middleware are typically expected to be highly reliable, we will focus on the factors that affect the reliability of the software sub-component. Subsequently, we will discuss how the reliability allocated to the software sub-component can be used to guide the selection of a combination of testing, restart and repair strategies. Finally, we will discuss approaches for obtaining reliability data (model inputs) for the software components.

We will illustrate the use of the methodology to gain insights into the influence of different parameters on system reliability using two examples. The first example is the IP Multimedia Subsystem (IMS), a standardized Next Generation Networking (NGN) architecture that allows telecom providers to offer mobile and fixed multimedia services. The second example involves a virtualized application deployed on a cloud platform. Examples of implementing such models in the SHARPE modeling tool will also be shown.


  • Introduction and Motivation
  • Examples of major software failures
  • Overview of various software reliability models
  • Basics of architecture-based analysis and modeling
  • Impact of uncertain parameters
  • Example 1: Reliability analysis of IMS application
  • Example 2: Reliability analysis of virtualized application

Bio: Dr. Swapna S. Gokhale is an Associate Professor in the Dept. of Computer Science and Engineering at the University of Connecticut. She received her M.S. and Ph.D. in Electrical and Computer Engineering from Duke University in 1996 and 1998 respectively, and her B.E. (Hons.) in Electrical and Electronics Engineering and Computer Science from the Birla Institute of Technology and Science, Pilani, India, in 1994. Prior to UConn, she was a Research Scientist at Telcordia Technologies and a Post Graduate Researcher at the University of California, Riverside. Her research interests lie in performance and dependability analysis of computer systems, mining of social network and web log data, and software engineering education. She has published over 150 conference and journal papers on these topics. She is a Senior Member of the IEEE and a recipient of Best Paper awards at several international conferences. She received the National Science Foundation CAREER award to support her research in architecture-based software reliability analysis.

Dr. Veena Mendiratta is the Practice Lead for Network Reliability and Analytics in Bell Labs at Alcatel-Lucent in Naperville, Illinois. She began her career at AT&T Bell Labs in 1984. Her work has focused on the reliability and performance analysis of telecommunications products, networks, and services to guide system architecture solutions, and on telecom data analytics. She has led projects to develop anomaly prediction algorithms for wireless networks and customer experience analytics using data mining and social network analysis techniques. Her current work includes data mining methods for improving the performance of wireless networks and cloud reliability engineering for telecom applications. She is a member of INFORMS and a Senior Member of the IEEE. Dr. Mendiratta received a B.Tech in engineering from the Indian Institute of Technology, New Delhi, India and a PhD in operations research from Northwestern University, USA. She is an Adjunct Professor in the MS in Analytics program at Northwestern University.


Watermarking Techniques for Scalable Coded Image and Video Authentication

Dr. Deepayan Bhowmik
Vision Lab, Institute of Sensors and Systems, Heriot-Watt University, Edinburgh, UK

Dr. Arijit Sur

Computer Science Department, Indian Institute of Technology Guwahati, India

Length of the tutorial: Half-day (3 hrs)

Abstract: Owing to the increasing heterogeneity of end-user devices for playing multimedia content, scalable image and video communication has attracted significant attention in recent years. Such advancements are duly supported by recent scalable coding standards for multimedia content, i.e., JPEG2000 for images, the MPEG advanced video coding (AVC)/H.264 scalable video coding (SVC) extension for video, and the MPEG-4 scalable profile for audio. In scalable coding, high-resolution content is encoded to the highest visual quality and the bit-streams are adapted to cater to various communication channels, display devices, and usage requirements. However, protection and authentication of such content remain challenging and, not surprisingly, attract attention from both researchers and industry. Digital watermarking, which has seen considerable growth over the last two decades, has been proposed in the literature as a solution for scalable content protection and authentication. Watermarking for scalable coded image and video faces a unique set of challenges under scalable content adaptation. The tutorial will discuss various research problems and solutions associated with image and video watermarking techniques in this field. It will help participants understand 1) image and video watermarking and its properties, 2) watermarking strategies for scalable coded image and video, and 3) recent developments and open questions in this field.
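
A minimal sketch of DWT-domain watermarking of the kind the tutorial discusses: a one-level 1-D Haar transform, additive embedding of watermark bits in the detail coefficients, and non-blind extraction by comparison with the original coefficients. Real schemes operate on 2-D wavelet subbands of images and use perceptual models; the signal, the embedding strength `alpha` and the bit pattern here are invented examples.

```python
def haar_fwd(x):
    """One-level 1-D Haar transform: pairwise averages and differences."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inv(approx, detail):
    """Exact inverse of haar_fwd."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(x, bits, alpha=0.5):
    """Additively embed one bit per detail coefficient (+alpha for 1, -alpha for 0)."""
    a, d = haar_fwd(x)
    d2 = [di + alpha * (1 if b else -1) for di, b in zip(d, bits)]
    return haar_inv(a, d2)

def extract(x_marked, x_orig, alpha=0.5):
    """Non-blind extraction: compare marked and original detail coefficients."""
    _, d_marked = haar_fwd(x_marked)
    _, d_orig = haar_fwd(x_orig)
    return [1 if dm > do else 0 for dm, do in zip(d_marked, d_orig)]

signal = [10.0, 12.0, 9.0, 9.0, 14.0, 13.0, 8.0, 11.0]
marked = embed(signal, [1, 0, 1, 1])
```

The scalability challenge the abstract mentions appears when the adaptation step discards the very subband carrying the mark, which is why subband choice matters so much in this field.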


  • Digital watermarking (properties and applications) and frequency domain transforms used in watermarking, e.g., Discrete wavelet transform (DWT) (25 mins)
  • Scalable image and video coding and its application in multimedia signal processing (25 mins)
    • JPEG2000, MJPEG2000, MC-EZBC, and MPEG-AVC / H.264-SVC image and video coding
  • Research techniques for image watermarking for JPEG2000 content adaptation (40 mins)
    • Imperceptibility issues
    • Robustness issues
  • Research techniques for video watermarking for content adaptation (50 mins)
    • Imperceptibility issues (particularly flicker in video watermarking)
    • Robustness issues
    • Real-time watermarking
  • Recent developments and open questions in this field (20 mins)

Bio: Dr. Deepayan Bhowmik is a research associate in the Vision Lab within the Institute of Sensors and Systems at Heriot-Watt University, Edinburgh. He received his B.E. in Electronics Engineering from Visvesvaraya National Institute of Technology (VNIT), Nagpur, India, his M.Sc. in Electronic and Information Technology from Sheffield Hallam University, UK, and his PhD from The University of Sheffield, UK. Deepayan received the prestigious EPSRC/British Petroleum Dorothy Hodgkin Postgraduate Award (DHPA) for his PhD study. Previously, he worked as a research associate at The University of Sheffield, UK and as a system engineer at ABB Ltd., India. He has authored more than 20 peer-reviewed research papers, which have appeared in ACM MMSEC, IEEE ISCAS, Springer IWDW, SPIE Electronic Imaging, and IET-IPR, among others. His current research interests include programmable embedded image processor architectures, vision-based crowd behaviour understanding, person tracking, and image and video forensics.

Dr. Arijit Sur is an Assistant Professor in the Computer Science Department at the Indian Institute of Technology Guwahati, India. He received his Ph.D. in Computer Science and Engineering from the Indian Institute of Technology Kharagpur, and his M.Sc. in Computer and Information Science and M.Tech in Computer Science and Engineering, both from the Department of Computer Science and Engineering, University of Calcutta. Dr. Sur received an Institute Scholarship (IIT Kharagpur) and an Infosys Scholarship for his PhD study, and the Microsoft Outstanding Young Faculty Programme Award at the Dept. of CSE, IIT Guwahati. He has published more than 25 peer-reviewed international and national conference and journal papers, and has secured a number of project grants on multimedia security. His research interests include multimedia security, e.g., image and video watermarking, reversible data hiding; steganography and steganalysis for image and video; and scalable video streaming.


Design-for-Testability Automation of Analog and Mixed-Signal Integrated Circuits

Dr. Sergey Mosin

Vladimir State University, Vladimir, Russia

Abstract: Testing occupies an important place in the design and implementation of electronic circuits. About 40-60% of the total time required for IC development is spent on test procedures. According to the "rule of ten", the cost of testing increases tenfold at each subsequent manufacturing stage. This high expenditure is due to the increasing complexity of ICs and the resulting complication of test-related efforts. Increasing the efficiency of test preparation and execution for analog and mixed-signal integrated circuits is therefore a pressing task. The following factors add to the complexity of IC testing: changes in technological processes, increasing scale of integration, high functional complexity of the devices being developed, limited access to internal components of the IC, etc. Test costs (both time and money) can be reduced by developing and applying new, efficient test strategies. Design-for-Testability (DFT) is one promising approach, ensuring the selection of a proper test solution already at the early stages of IC design.

This tutorial will focus on design-for-testability automation. Four key processes will be considered: simulation, test generation, testing sub-circuit generation and decision making. Simulation calculates the main parameters and characteristics of a designed circuit using mathematical models and numerical methods for electronic-circuit analysis. Test generation selects the controlled parameters, test nodes and test stimuli for a designed circuit, constructs the fault dictionary and estimates the efficiency of the obtained test patterns. Testing sub-circuit generation selects test structures (DFT solutions) for the analog and digital sub-circuits and embeds them in the original circuit, reducing the overall test complexity of the manufactured mixed-signal IC. Decision making compares the proposed DFT solutions on the basis of a cost model and fault coverage, and selects the most reasonable DFT solution for the designed IC, taking into account the integrated technology used for manufacturing, the production volume, the chip area, the effective wafer radius, etc.
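As an illustration of the decision-making step, the following purely hypothetical sketch compares candidate DFT solutions on an estimated test cost and on fault coverage, and picks the cheapest one that meets a coverage target; all names, weights and figures are invented for this example, not values from the tutorial.

```python
# Purely hypothetical sketch of the decision-making step: candidate DFT
# solutions are compared on an additive cost model (silicon overhead plus
# tester time) and the cheapest one meeting a fault-coverage target wins.
candidates = [
    {"name": "oscillation BIST",   "area_overhead": 0.04, "test_time_s": 1.2, "fault_coverage": 0.91},
    {"name": "IEEE 1149.4 bus",    "area_overhead": 0.07, "test_time_s": 0.8, "fault_coverage": 0.95},
    {"name": "external test only", "area_overhead": 0.00, "test_time_s": 4.0, "fault_coverage": 0.85},
]

def dft_cost(c, area_weight=10.0, time_weight=1.0):
    # weights translate chip-area overhead and test time into one cost figure
    return area_weight * c["area_overhead"] + time_weight * c["test_time_s"]

viable = [c for c in candidates if c["fault_coverage"] >= 0.90]
best = min(viable, key=dft_cost)
print(best["name"])  # the bus-based solution wins: cost 1.5 vs 1.6
```

A real cost model would also account for production volume and wafer utilisation, as the abstract notes; the point here is only the shape of the comparison.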


  • Importance of IC testing:

Defects and faults. The role and place of testing in the life cycle of an IC and a system. Verification, testing and diagnosis. The rule of ten. The mission of design-for-testability.

  • Methodology of design-for-testability automation:

Design flow of analog and mixed-signal ICs. Design-for-testability automation. Selection of test nodes. Selection of test signals and test patterns. Fault dictionary construction. Econometric (cost) models. Criteria for selecting the testing circuitry for analog and digital sub-circuits.

  • Approaches to testing analog sub-circuits:

Increasing the observability and controllability of internal nodes through in-circuit multiplexers and demultiplexers. The oscillation built-in self-test technique. Signature analyzers. The IEEE 1149.4 test bus.

  • Approaches to testing digital sub-circuits:

Built-in self-test: LFSR, MISR, BILBO. The IEEE 1149.1 test bus.
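To give a flavour of the first of these building blocks, here is a minimal sketch (not taken from the tutorial) of a linear-feedback shift register used as a pseudo-random test-pattern generator; the seed, tap positions and width are illustrative example choices.

```python
# Illustrative 4-bit Fibonacci LFSR of the kind used as a pseudo-random
# test-pattern generator in built-in self-test. Taps at bit positions 3 and 2
# give a maximal-length sequence: all 15 non-zero states before repeating.
def lfsr_sequence(seed=0b1001, taps=(3, 2), width=4):
    state, states = seed, []
    while True:
        states.append(state)
        feedback = 0
        for t in taps:                      # XOR the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
        if state == seed:                   # sequence has wrapped around
            break
    return states

patterns = lfsr_sequence()
print(len(patterns))  # 15 distinct non-zero test patterns
```

An n-bit MISR compacts the circuit's responses with the same shift-and-XOR structure, which is why the two appear together in BIST schemes.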

Bio: Dr. Sergey Mosin received the Ph.D. degree and defended the D.Sc. degree in computer engineering at Vladimir State University in 2000 and 2013, respectively. He is an Associate Professor. He was acting head of the computer engineering department and vice-rector for research at Vladimir State University from 2006 to 2012. He has published over 100 papers on CAD and VLSI design. His research interests focus on the design and test of mixed-signal integrated circuits, design automation and CAD tools. Dr. Mosin served as Publicity Chair of the IEEE East-West Design and Test Symposium. He is a member of the IEEE and of the Test Technology Technical Council (TTTC) of the IEEE Computer Society.


Systems Safety, Security & Sustainability

Ali Hessami
Prof. Ali Hessami

Vega Systems, UK

Length of the tutorial: Half-day

Scope: The incessant demand for better value, increased functionality and enhanced quality underlies the drive towards innovation and the exploitation of emerging technologies. Whilst these bring a mixed bag of desirable properties to modern products, services and systems, they are often accompanied by complexity, uncertainty and risk. The performance of products, services, systems and undertakings is a measure of their utility, output and perceived or real emergent properties. The key facets of performance are technical, reliability/availability, commercial, safety, security/vulnerability, environmental/sustainability, quality and perceived value/utility.

Whilst the above dimensions are reasonably distinct and often inter-related, the key differentiation between safety and security aspects is broadly as follows: safety is freedom from harm to people caused by unintentional or random/systematic events, whilst security is freedom from loss caused by deliberate acts perpetrated by people. In this spirit, security is principally characterized by intent and causation, as opposed to strictly being an output performance indicator reflecting degrees of loss or gain. Sustainability is a more complex attribute and encompasses societal, economic, environmental, resource and technological dimensions.

Other than hard (Technical, Commercial) and soft (Quality and Value) performance criteria, the rest are mainly measured probabilistically in terms of risk or reward due to inherent uncertainties. The overall utility and success of any endeavor essentially amounts to getting the correct balance between these hard and soft performance attributes of the goal being pursued. The optimization of these factors poses a major challenge to the duty holders and decision makers today since it demands understanding and competence in social, behavioral, commercial, legal as well as technical engineering disciplines. In this spirit, systems assurance comprises the portfolio of methods, processes, resources and activities adopted to ensure products, services and systems are designed and operated to deliver a required blend of desired performance measures whilst remaining free from undesirable emergent properties which pose a threat to health, safety and welfare of people, commercial damage to the businesses and harm to the natural habitat.

The tutorial on systems oriented safety, security & sustainability would endeavor to cover the following facets of systemic assurance:

  • Systems specification
  • Requirements Analysis/Specification and Target Setting
  • High Integrity Systems Design
  • Systems Modeling and Simulation
  • Qualitative and Quantitative Systems Safety, Security & Sustainability Assessment
  • Probabilistic Safety and Security Performance Forecasting
  • Systems Risk and Reward Management
  • Demonstration of Compliance against Standards and Legal Requirements

Bio: Prof. Ali Hessami is currently the Director of R&D and Innovation at Vega Systems, UK. He is an expert in systems assurance and in safety, security, sustainability and knowledge assessment/management methodologies, and has a background in the design and development of advanced control systems for business- and safety-critical industrial applications.

Ali project managed the safety analysis and assessment of European Rail Traffic Management System’s ETCS for the EU Commission under the ESROG project. He also project managed the development of an advanced and systematic Safety & Risk Management System for EU Commission under SAMRail project, in support of the pan European Railway Safety Directive. He contributed significant original material to CENELEC WGA10 Report TR-50451 on Allocation of Safety Integrity & TR-50506-1 on the Cross-Acceptance of Signalling Systems. He represents UK on CENELEC & IEC safety systems, hardware & software standards committees. He was appointed by CENELEC as convenor of WGA11 for review of EN50128 Software Safety Standard and Convener of RG3 in WG14, where he is responsible for update and restructuring of the software, hardware and system safety standards in CENELEC. Ali also heads the System Safety & Security Technical Committee at IEEE Systems and Cybernetics Society (SMC) whilst chairing the SMC Chapter in the UK&RI Section of IEEE.

During December 2013, Ali was appointed as a Member of the Institution of Engineering & Technology (IET-UK) Council and as the Vice Chair of the IEEE in the UK and the Republic of Ireland. Ali is a Visiting Professor at London City University's Centre for Systems and Control in the School of Engineering & Mathematics and at the Beijing Jiaotong University School of Electronics & Information Engineering. He is also a Fellow of the Royal Society of Arts (FRSA), a Fellow of the Institution of Engineering & Technology (IET), a Senior Member of the IEEE and a member of the Security Institute.


High Efficiency Video Coding

Shailesh Ramamurthy
Mr. Shailesh Ramamurthy

Arris India (the group was formerly Motorola Home), Bangalore, India

Length of the tutorial: Half-day

Scope: High Efficiency Video Coding (HEVC) is the latest video compression standard from ISO/IEC MPEG and ITU-T VCEG, and promises to be a spectacular successor to H.264/MPEG-4 AVC. It targets twice the compression efficiency of H.264/MPEG-4 AVC when benchmarked at the same video quality, and is well suited to resolutions such as 4K and 8K Ultra High Definition.

The tutorial would cover the following modules:

  • Introduction to compressing and delivering visual media of the next generation
  • Enabling Techniques: Overview of Tree Block Structures, Intra and inter prediction techniques, Entropy Coding, Motion Compensation, Motion Vector Prediction, Transform Techniques, Deblocking and Sample Adaptive offset filters
  • Parallel Processing Tools
  • Applicability in end-to-end use-cases
  • Scalable coding and 3D extensions

This tutorial will benefit participants from academia and industry interested in understanding HEVC. Theoretical concepts will be linked to end-to-end use-cases to drive home the applicability of various tools and techniques.

Bio: Shailesh Ramamurthy has been a key contributor to the JPEG2000, Scalable Video, H.264 and HEVC programs at Motorola (formerly) and Arris (presently). He has actively participated in JPEG2000 Standard Committee meetings.  He has been working in the area of image, video and signal processing for the last eighteen years, focusing on algorithmic, architectural and implementation aspects, with the last sixteen years being at Motorola and Arris. His areas of interest include image and video compression for embedded and mobile applications, scalability in image and video coding, H.264, HEVC and audio synthesis. Shailesh was awarded the Dr. Shankar Dayal Sharma Gold medal for his M.Tech from the Indian Institute of Technology, Kharagpur, and received his B.E. from VJTI, Bombay.


Network Security and Beyond: Network Anomaly Detection in the Field

Christian Callegari
Dr. Christian Callegari

Dept. of Information Engineering, University of Pisa, Italy

Length of the tutorial: Half-day


This tutorial provides an overview of the most relevant approaches to network anomaly detection, as well as of the main challenges in applying anomaly detection to “real world” scenarios. The tutorial is structured into three main parts: in the first one, starting from the seminal work by Denning, the basic concepts about anomaly detection will be introduced. Then, in the second part, some of the most recent and relevant works about statistical anomaly detection will be discussed. For each of the presented methods, the description of the theoretical background, focusing on why the method should be effective in detecting network anomalies and attacks, will be accompanied by a discussion on the anomalies that can be detected and on the achievable results, as well as on the main limitations of the method. Finally, the third part of the tutorial will focus on the challenges that arise when applying Anomaly Detection in the field, e.g., how to deal with huge quantities of data or with the privacy concerns typical of highly distributed scenarios.

Outline of the presentation

I. Introduction and Motivation (10 min)

II. Basics of Statistical Intrusion Detection Systems (20 min)

General Concepts about Anomaly Detection

IDES (Intrusion Detection Expert System): the use of a statistical approach to detect anomalies in network traffic was first introduced by Denning, who proposed an early, abstract model of an Intrusion Detection Expert System based on the statistical characterization of the behavior of a subject with respect to a given object.

III. Statistical approaches for anomaly detection (90 min)

Snort: does the most famous IDS perform anomaly detection? On what basis, and to what extent?

Clustering: clustering is a well-known technique, usually applied to classification problems. In the context of anomaly detection, two distinct approaches have been developed, both of which will be discussed.
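One of these approaches treats points that lie far from the clusters of normal behaviour as anomalous. The toy sketch below shows the idea on synthetic data; a single centroid stands in for a fitted clustering model, and the 3-sigma threshold is an invented example choice.

```python
# Toy clustering-style detector on synthetic data: a single centroid stands
# in for a fitted clustering model, and any point farther from it than a
# 3-sigma threshold is flagged. Features and threshold are invented.
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 2))          # baseline traffic feature vectors
data = np.vstack([normal, [[8.0, 8.0]]])    # append one clearly deviant point

centroid = normal.mean(axis=0)
dist = np.linalg.norm(data - centroid, axis=1)
threshold = dist[:200].mean() + 3 * dist[:200].std()
flags = dist > threshold

print(bool(flags[-1]))  # True: the deviant point is flagged
```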

Heavy Hitters and Heavy Changes: monitoring the changes in the distribution of the heavy hitters of the network traffic can be used to detect DoS/DDoS attacks, as well as other distributed attacks (e.g., Botnets).
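A classic one-pass way to track heavy hitters with bounded memory is the Misra-Gries algorithm, sketched below on invented IP addresses: at most k-1 counters are kept, yet every flow exceeding a 1/k share of the stream is guaranteed to survive as a candidate.

```python
# One-pass heavy-hitter tracking with the Misra-Gries algorithm. When all
# k-1 counter slots are taken and a new flow arrives, every counter is
# decremented; dominant flows can never be decremented away completely.
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            for key in list(counters):      # no room: decrement everyone
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

packets = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 25 + ["10.0.0.3"] * 15
candidates = misra_gries(packets, k=3)
print("10.0.0.1" in candidates)  # True: the dominant source is retained
```

Monitoring how this candidate set changes over time is one way to spot the sudden traffic concentrations typical of DoS/DDoS attacks.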

CUSUM: CUSUM-based approaches, which aim at detecting abrupt changes in the time series given by the temporal evolution of several traffic parameters (e.g., the number of received bytes), can be used to detect anomalies in network traffic.
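A minimal one-sided CUSUM detector can be sketched as follows; the reference level, drift and threshold here are illustrative values, not ones recommended by the tutorial. Deviations above mu plus an allowed drift accumulate, and an alarm fires once the cumulative sum crosses h.

```python
# Minimal one-sided CUSUM change detector for an upward shift in a traffic
# time series (e.g., bytes received per interval).
def cusum(series, mu, drift=0.5, h=5.0):
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (x - mu - drift))  # accumulate upward deviations only
        if s > h:
            return i                        # index where the change is flagged
    return None

# bytes per interval: stable around 10, then a sudden jump (e.g., a flood)
traffic = [10, 11, 9, 10, 12, 10, 9, 11, 30, 32, 31, 29]
print(cusum(traffic, mu=10.0))  # 8: the first flooded interval
```

The drift term absorbs normal fluctuation around the reference level, so small oscillations never accumulate into an alarm.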

PCA: principal component analysis is effectively used to tackle the problem of high-dimensional datasets, which usually affects network monitoring systems. In this field, PCA is often applied as a detection scheme: it reduces the dimensionality of the audit data and detects anomalies by means of a classifier that is a function of the principal components. Despite being one of the most widely applied tools for anomaly detection (also in commercial products), it presents many limitations.
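In the spirit of the PCA subspace method, the sketch below learns the principal directions of normal traffic, projects samples onto the residual (minor) subspace, and flags those with a large residual norm. The data are synthetic and the 3-sigma limit is an invented example choice.

```python
# PCA subspace-method sketch: normal samples lie close to the principal
# subspace, so a large projection onto the residual subspace is anomalous.
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=(300, 1))
normal = np.hstack([t, 2 * t + rng.normal(scale=0.1, size=(300, 1))])
X = np.vstack([normal, [[3.0, -6.0]]])      # last sample breaks the correlation

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
residual = Vt[1:]                           # minor component(s)
score = np.linalg.norm((X - mean) @ residual.T, axis=1)
limit = score[:300].mean() + 3 * score[:300].std()
print(bool(score[-1] > limit))  # True: the inconsistent sample stands out
```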

IV. Anomaly Detection in the Field (90 min)

- Dealing with traffic seasonality: the seasonality of traffic poses several problems when applying most anomaly detection techniques. Some of the most classical approaches (e.g., wavelet analysis) to pre-filtering such seasonal components will be discussed, highlighting the improvements (and the drawbacks) they introduce in the system.

- Dealing with huge quantities of data: the explosive growth of traffic poses several problems when applying techniques that need to process the whole traffic. We will discuss the pros and cons of several data mining techniques (e.g., Sketch and Reversible Sketch) that permit analyzing a data stream, almost in real time, without storing all the data.
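As a taste of this family of data structures, here is a minimal Count-Min Sketch (the width and depth are arbitrary illustrative sizes): a small fixed table of counters answers per-flow frequency queries over a stream with one-sided error, so counts can be approximated without storing every packet.

```python
# Minimal Count-Min Sketch: d hash rows of w counters; an item's estimate is
# the minimum of its d counters, which can only overestimate the true count.
import hashlib

class CountMinSketch:
    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # derive one independent-looking hash per row from SHA-256
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # never underestimates: hash collisions can only inflate counters
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for _ in range(40):
    cms.add("10.0.0.1")
cms.add("10.0.0.2", 3)
print(cms.estimate("10.0.0.1") >= 40)  # True: estimates are upper bounds
```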

- Dealing with distributed environments: highly distributed, multi-domain environments pose several constraints on the application of any traffic monitoring technique (e.g., privacy concerns). We will discuss how to deal with them so as to respect the legislation while still being able to perform anomaly detection effectively.

V. Discussion and perspectives (30 min)

Intended Audience: This tutorial is addressed to all researchers and practitioners working in the field of networking who are interested in detecting anomalous behavior in the network, and in particular to those dealing with intrusion detection systems, anomaly detection, and DoS/DDoS attack detection. In addition, the tutorial may be of interest to those dealing with statistical approaches for traffic monitoring and classification.

Since all the theoretical notions necessary to understand the covered topics will be provided in the tutorial, no particular background is required of attendees, except for some basics of networking (the TCP/IP architecture).

Bio: Christian Callegari received the B.E. and M.E. degrees in telecommunications engineering and the Ph.D. degree in information engineering from the University of Pisa in 2002, 2004, and 2008, respectively. Since 2005, he has been with the Dept. of Information Engineering at the University of Pisa, where he is currently a postdoc research fellow. In 2006/07, he was a visiting student research collaborator at the Dept. of Computer Science at ENST Bretagne, France, and in 2013 he was a visiting researcher at Eurecom, Sophia Antipolis, France. He has given several Ph.D. courses on anomaly detection, network security, and statistical traffic classification (both at the national and international level) and has also given several tutorials on anomaly detection at leading international conferences.

His research interests are in the area of network security and monitoring. He has participated in several research projects related to anomaly detection, both at the national (e.g., PRIN RECIPE) and the European level (FP7 STREP PRISM, FP7 IP DEMONS, NGI/NFI Networks of Excellence, and the COST TMA action). Moreover, he has been the technical coordinator of several regional and local projects related to network security and monitoring.

Christian Callegari has coauthored more than 70 journal and conference papers and is the editor of the book “Data Traffic Monitoring and Analysis: From Measurement, Classification, and Anomaly Detection to Quality of Experience” (LNCS 7754, Springer, 2013). He is the general chair of the international workshop on traffic analysis and classification (TRAC) and the TPC co-chair of several conferences and tracks at leading international conferences. Moreover, he is a member of the editorial board of several international journals (e.g., the International Journal of Trust Management in Computing and Communications) and serves as a TPC member for several international conferences (e.g., IEEE Globecom and ICC) and as a reviewer for several journals (e.g., IEEE/ACM Transactions on Networking, IEEE Communications Surveys and Tutorials, Wiley Security and Communication Networks, Elsevier Computer Networks) and conferences.


Web Application Security

Manu Zacharia
Manu Zacharia

C|EH, C|HFI, CCNA, MCP, Certified ISO 27001-2005 Lead Auditor, MVP-Enterprise Security(2009-2012), ISLA-2010 (ISC)2

Length of the tutorial: 3 Hours


  • Intro to Web Application Security
  • Web Application Architecture
  • Web Application Security Testing / Penetration Testing
  • OWASP Top 10 vulnerabilities
  • Injection Attacks
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards
  • Incident management
  • Log analysis

Buffer Topics

  • Other Vulnerabilities
  • File upload Vulnerabilities
  • Shells
  • Web Application Denial-of-Service (DoS) Attack
  • Buffer Overflow


Bio:

  • Information Security evangelist with more than twenty years of professional experience.
  • Awarded the prestigious Microsoft Most Valuable Professional (MVP) award for four consecutive years (2009, 2010, 2011 and 2012) in the Enterprise Security stream.
  • Honored with the prestigious Asia Pacific Information Security Leadership Achievements Award for 2010 from (ISC)² under the Senior Information Security Professional category.
  • Awarded the Nullcon Black Shield Award for 2014 under the Community Star category for contribution to the community in terms of knowledge sharing, administration, communication and proliferation.
  • Recipient of the Newsmakers Achievers Award in the IT sector as the Best Ethical Hacker in 2011.
  • Founder of the c0c0n International Hacking & Information Security Conference and of the Information Security Day initiatives.
  • Co-founder of Ground Zero, Asia’s foremost information security conference.
  • Creator and chief architect of Matriux, a security and penetration testing operating system.
  • Associated with the International Multilateral Partnership Against Cyber Threats (IMPACT), the cyber security executing arm of the United Nations' specialized agency, the International Telecommunication Union (ITU), as an Expert Trainer.
  • Director, Indian Infosec Consortium.
  • Member, Technology Steering Committee, National Security Database, an initiative by the ISAC Foundation and the Government of India for national critical infrastructure protection and cyber safety.
  • Enlisted with Prometric (a global leader in technology-enabled testing and assessment services) as their Subject Matter Expert (SME) for cyber security.
  • Associated with the Signal School, Centre for Defense Communication & Electronic Warfare, the premier professional training institution of the Indian Navy in communications and information warfare, for their various cyber security courses.
  • Subject Matter Expert for The Information Assurance and Homeland Security Academy.
  • Co-authored a book on intrusion detection systems.
  • Also associated with the Southern Command, Indian Army, and the Criminal Investigation Department (CID), Maharashtra Police, for their cyber security training through C-DAC, ACTS.
  • Founder and producer of Right Click, a TV tech show aired in over 60 countries by Asianet News.
  • Speaker at various international and national security and technology conferences, including Cyber Security Summit 2012 (Kuala Lumpur, Malaysia), Qualys Security Conference 2011 (keynote speaker), Microsoft Tech-Ed (2010 and 2011), ClubHack, Enterprise Information Security 2010 (Singapore), Bangalore Cyber Security Summit, Security Conference Bangalore 2010, DevCon, Microsoft Virtual TechDays, Nullcon 2011, etc.
  • Expert member of the Curriculum Review Committee of the Indira Gandhi National Open University M.Tech programme in Information Systems Security.
  • Associated with the Centre for Development of Advanced Computing (C-DAC, the R&D institution and scientific society of the Ministry of Communication & Information Technology, Government of India) as a guest faculty for their various information security modules.
  • President of the Information Security Research Association (ISRA) and an active member of the Data Security Council of India, Bangalore Chapter.
  • Closely associated with academia on various projects and an invited speaker at various colleges, including IIIT Allahabad, IMT Ghaziabad and SCIT.
  • Visiting faculty at Gujarat Technical University (M.Tech programme in IT Systems & Network Security).