Keynote Speakers

Speaker: Prof. K. R. Rao, Dept. of Electrical Engineering, University of Texas at Arlington, USA

Title of Talk: High Efficiency Video Coding
 
Biography: K. R. Rao received the Ph.D. degree in electrical engineering from The University of New Mexico, Albuquerque, in 1966, and the B.S.E.E. from the College of Engineering, Guindy, India, in 1952. Since 1966 he has been with the University of Texas at Arlington, where he is currently a professor of electrical engineering. He, along with two other researchers, introduced the Discrete Cosine Transform (DCT) in 1975, which has since become very popular in digital signal processing. The DCT, INTDCT (integer DCT), directional DCT and MDCT (modified DCT) have been adopted in several international video/image/audio coding standards such as the JPEG/MPEG/H.26x series, as well as by SMPTE (VC-1) and by AVS China. He is the co-author of the books "Orthogonal Transforms for Digital Signal Processing" (Springer-Verlag, 1975), which was also recorded for the blind in Braille by the Royal National Institute for the Blind; "Fast Transforms: Analyses and Applications" (Academic Press, 1982); and "Discrete Cosine Transform: Algorithms, Advantages, Applications" (Academic Press, 1990). He has edited the benchmark volume "Discrete Transforms and Their Applications" (Van Nostrand Reinhold, 1985) and co-edited the benchmark volume "Teleconferencing" (Van Nostrand Reinhold, 1985). He is co-author of "Techniques and Standards for Image/Video/Audio Coding" (Prentice Hall, 1996), "Packet Video Communications over ATM Networks" (Prentice Hall, 2000) and "Multimedia Communication Systems" (Prentice Hall, 2002). He has co-edited the handbook "The Transform and Data Compression Handbook" (CRC Press, 2001), and co-authored "Digital Video Image Quality and Perceptual Coding" (with H. R. Wu; Taylor and Francis, 2006) and "Introduction to Multimedia Communications: Applications, Middleware, Networking" (with Z. S. Bojkovic and D. A. Milovanovic; Wiley, 2006). He has also published "Discrete Cosine and Sine Transforms" (with V. Britanak and P. Yip; Elsevier, 2007) and "Wireless Multimedia Communications" (Taylor and Francis, Nov. 2008).
He has published extensively in refereed journals and has been a consultant to industry, research institutes, law firms and academia. He has reviewed 23 book manuscripts for book publishers. He is a Fellow of the IEEE. He is a member of the Academy of Distinguished Scholars, UTA.
 
Abstract: In the family of video coding standards, HEVC has the promise and potential to replace/supplement all the existing standards (MPEG and H.26x series including H.264/AVC). While the complexity of the HEVC encoder is several times that of the H.264/AVC encoder, the decoder complexity is within the range of the latter. Researchers are therefore exploring ways to reduce the HEVC encoder complexity. Kim et al. have shown that motion estimation (ME) occupies 77-81% of HEVC encoder implementation; hence the focus has been on reducing the ME complexity. Several researchers have carried out performance comparisons of HEVC with other standards such as H.264/AVC, MPEG-4 Part 2 Visual, H.262/MPEG-2 Video, H.263, VP9, THOR and DAALA, and also with image coding standards such as JPEG, JPEG 2000, JPEG-LS, JPEG-XT and JPEG-XR. Several tests have shown that HEVC provides improved compression efficiency, up to 50% bit rate reduction for the same subjective video quality compared to H.264/AVC.

Besides addressing all current applications, HEVC is designed and developed to focus on two key issues: increased video resolution (up to 8K x 4K) and increased use of parallel processing architectures. A brief description of HEVC is provided; for details and implementation, the reader is referred to the JCT-VC documents, overview papers, keynote speeches, tutorials, panel discussions, poster sessions, special issues, test models (TM/HM), the web/ftp site, open source software, software manuals, test sequences, anchor bit streams and the latest books on HEVC. Researchers are also exploring transcoding between HEVC and other standards such as MPEG-2 and H.264. Further extensions to HEVC are scalable video coding (SVC), 3D/multiview video coding and range extensions, which include screen content coding (SCC), bit depths larger than 10 bits and color sampling of 4:2:2 and 4:4:4. SCC in general refers to computer-generated objects and screen shots from computer applications (both images and videos) and may require lossless coding. Some of these extensions were finalized by the end of 2014 (the time frame for SCC is late 2016). They also provide fertile ground for R&D. Iguchi et al. have already developed a hardware encoder for super hi-vision (SHV), i.e., ultra HDTV at 7680x4320 pixel resolution. Real-time hardware implementation of an HEVC encoder for 1080p HD video has also been done. NHK is planning SHV experimental broadcasting in 2016. A 249-Mpixel/s HEVC video decoder chip for 4K Ultra-HD applications has already been developed. Bross et al. have shown that real-time software decoding of 4K (3840x2160) video with HEVC is feasible on current desktop CPUs using four CPU cores. They also state that encoding 4K video in real time, on the other hand, remains a challenge.
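The "up to 50% bit rate reduction at the same quality" figures quoted above are conventionally computed as Bjontegaard-delta (BD) rates over rate-distortion curves. A minimal sketch of that calculation in Python/NumPy follows; the sample rate-PSNR points are hypothetical, not measured HEVC data:

```python
import numpy as np

def bd_rate(rate_a, psnr_a, rate_b, psnr_b):
    """Bjontegaard-delta rate: average percent bit-rate difference of
    codec B vs. codec A at equal PSNR, from cubic fits of log-rate
    as a function of PSNR, integrated over the overlapping PSNR range."""
    la, lb = np.log10(rate_a), np.log10(rate_b)
    pa = np.polyfit(psnr_a, la, 3)
    pb = np.polyfit(psnr_b, lb, 3)
    lo = max(min(psnr_a), min(psnr_b))
    hi = min(max(psnr_a), max(psnr_b))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    ib = np.polyval(np.polyint(pb), hi) - np.polyval(np.polyint(pb), lo)
    avg_diff = (ib - ia) / (hi - lo)     # mean log10 rate difference
    return (10 ** avg_diff - 1) * 100    # percent

# Hypothetical operating points: codec B needs half the bits of codec A
# at every PSNR, so the BD-rate should come out near -50%.
psnr = np.array([34.0, 36.0, 38.0, 40.0])
rate_a = np.array([1000.0, 2000.0, 4000.0, 8000.0])  # kbps
rate_b = rate_a / 2
print(bd_rate(rate_a, psnr, rate_b, psnr))
```

A negative BD-rate means the second codec needs fewer bits for the same quality, which is how the HEVC-vs-H.264/AVC savings above are usually reported.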


Speaker: Prof. Erol Gelenbe,  Imperial College and Polish Academy of Sciences (Inventor of G-Networks and the Random Neural Network)

 
Title of Talk: Deep Learning with Spiking Random Neural Networks
 
Biography: Erol Gelenbe is a Fellow of IEEE, ACM and IET (UK), and a Professor in the Department of Electrical and Electronic Engineering at Imperial College, London. He has introduced computer and network performance models based on diffusion approximations, and invented the Random Neural Network Model, as well as G-Networks, which are analytically solvable queueing models that incorporate control functions such as work removal and load balancing. His other contributions include the concept and prototype for FLEXSIM, an object-oriented discrete event simulation approach for flexible manufacturing systems, and other commercially successful projects such as the QNAP tool for the Performance Evaluation of Computer Systems and Networks. His innovative designs include the first voice-packet switch SYCOMORE, the first fibre optics random access network XANTHOS, and the first implemented Cognitive Packet Network and its adaptive routing protocol. He also designed and published the first optimal protocol for random access communications, and an optimum check-pointing scheme for databases. For his work, he received several prizes from France, the UK, Hungary and Turkey, including the 2010 IET Oliver Lodge Medal, the 2008 ACM SIGMETRICS Life-Time Achievement Award, and the 1996 Grand Prix France Telecom of the French Academy of Sciences. He was awarded Knight of the Legion of Honour and Officer of the Order of Merit of France, and Grand Officer of the Order of the Star and Commander of Merit of Italy. He is a Fellow of the French National Academy of Engineering, the Royal Academy of Sciences, Arts and Letters of Belgium, the Science Academies of Hungary and Poland, and the Science Academy of Turkey. He was awarded Honoris Causa doctorates from the Universities of Liege (Belgium), Roma II (Italy) and Bogazici (Turkey).
He has graduated over 73 PhD students, and his recent papers appear in the Physical Review, the Communications of the ACM, and several IEEE and ACM Transactions.
 
Abstract: Networks in mammalian brains are mainly of a spiking nature, so the manner in which such networks learn is of great philosophical, scientific and engineering interest. Several years ago, we developed the first O(n^3) gradient descent learning algorithm for recurrent networks using the spiking and random behaviour of biological neuronal cells. In this presentation we will detail how these dense structures can be exploited in deep learning and how they can achieve significantly better performance than standard models. The presentation will be illustrated with numerous practical examples.
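For readers unfamiliar with the Random Neural Network mentioned above: its steady state is given by a fixed point in which each neuron's excitation probability is the ratio of its total excitatory arrival rate to its firing rate plus inhibitory arrival rate. A minimal sketch of that fixed-point iteration, with a hypothetical three-neuron feedforward network and made-up rates:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lam, lam, r, iters=200):
    """Fixed-point iteration for the steady-state excitation
    probabilities q of a random neural network:
        q_i = lambda_plus_i / (r_i + lambda_minus_i),
    where lambda_plus_i  = Lam_i + sum_j q_j * W_plus[j, i]
    and   lambda_minus_i = lam_i + sum_j q_j * W_minus[j, i]."""
    q = np.zeros_like(Lam, dtype=float)
    for _ in range(iters):
        lp = Lam + q @ W_plus      # total excitatory spike arrival rates
        lm = lam + q @ W_minus     # total inhibitory spike arrival rates
        q = np.clip(lp / (r + lm), 0.0, 1.0)
    return q

# Hypothetical 3-neuron network (all weights and rates are made up).
W_plus = np.array([[0.0, 1.0, 0.5],
                   [0.0, 0.0, 1.0],
                   [0.0, 0.0, 0.0]])
W_minus = np.array([[0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.5],
                    [0.0, 0.0, 0.0]])
r = (W_plus + W_minus).sum(axis=1)  # firing rate = sum of outgoing rates
r[2] = 2.0                          # output neuron's own firing rate
Lam = np.array([1.0, 0.5, 0.0])     # external excitatory arrivals
lam = np.zeros(3)                   # no external inhibitory arrivals
q = rnn_steady_state(W_plus, W_minus, Lam, lam, r)
print(q)
```

The product-form solvability illustrated here (each q_i stays a simple ratio, found by iteration) is what makes the model analytically tractable even for recurrent topologies.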

Speaker: Ronald P. Luijten, Data Motion Architect, IBM Research - Zurich

Title of Talk: Objective, innovation and impact of the energy-efficient DOME MicroDataCenter
 
Biography: Ronald P. Luijten, senior IEEE member, is the initiator and senior technical project leader of the IBM DOME microDataCenter project. Ronald’s personal research interests are in datacenter architecture, design and performance (‘Data Motion in Data Centers’). He holds more than 25 issued patents and has co-organized 7 IEEE conferences. Over his 32 years at IBM, he has received three outstanding technical achievement awards and a corporate patent award.
 
Abstract: The DOME MicroDataCenter technology, developed by IBM Zurich Research and ASTRON, the Netherlands Institute for Radio Astronomy, brings together embedded and data-center computing technologies, resulting in the densest general-purpose computing capability with the best energy efficiency. My presentation will cover how this project went from an initial idea and obtaining funding to a small dedicated team building innovative hardware and software. I will explain how we used first principles in physics to motivate our decisions, a few practical technical obstacles we needed to overcome, and key lessons we learnt along the way. Our result, which we are currently bringing to market through a startup company, addresses the needs of edge computing for the Internet of Things, an opportunity that we did not foresee when we started the project 5 years ago. I will close with a technology roadmap.

Speaker: Dr. Mauro Conti, University of Padua, Italy

Title of Talk: Can’t You Hear Me Knocking: Novel Security and Privacy Threats to Mobile Users
 
Biography: Mauro Conti is an Associate Professor at the University of Padua, Italy. He obtained his Ph.D. from Sapienza University of Rome, Italy, in 2009. After his Ph.D., he was a Post-Doc Researcher at Vrije Universiteit Amsterdam, The Netherlands. In 2011 he joined the University of Padua as Assistant Professor, and he became Associate Professor there in 2015. In 2017, he obtained the national habilitation as Full Professor for Computer Science and Computer Engineering. He has been Visiting Researcher at GMU (2008, 2016), UCLA (2010), UCI (2012, 2013, 2014, 2017), TU Darmstadt (2013), UF (2015), and FIU (2015, 2016). He was awarded a Marie Curie Fellowship (2012) by the European Commission and a Fellowship by the German DAAD (2013). His main research interest is in the area of security and privacy. In this area, he has published more than 200 papers in topmost international peer-reviewed journals and conferences. He is Associate Editor for several journals, including IEEE Communications Surveys & Tutorials and IEEE Transactions on Information Forensics and Security. He was Program Chair for TRUST 2015, ICISS 2016, WiSec 2017, and General Chair for SecureComm 2012 and ACM SACMAT 2013. He is a Senior Member of the IEEE.
 
Abstract: As smartphone and IoT device usage becomes more and more pervasive, people have also started asking to what extent such devices can be maliciously exploited as “tracking devices”. The concern is not only about an adversary taking physical or remote control of the device, but also about what a passive adversary without those capabilities can observe from the device communications. Work in this latter direction has aimed, for example, at inferring the apps a user has installed on a device, or identifying the presence of a specific user within a network. In this talk, we discuss threats coming from contextual information and to what extent it is feasible, for example, to identify the specific actions that a user is performing on mobile apps by eavesdropping on their encrypted network traffic. We will also discuss the possibility of building covert and side channels leveraging energy consumption and audio signals.
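The intuition behind the traffic-analysis threat described above is that even when payloads are encrypted, packet sizes and directions remain observable, and a classifier over such features can separate user actions. The toy sketch below illustrates only that intuition; the two "actions", the synthetic packet-length vectors and the nearest-centroid classifier are all hypothetical stand-ins, far simpler than the techniques in the actual work:

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_flows(mean_lens, n):
    """Synthetic per-flow features: packet-length vectors scattered
    around an action-specific mean (stand-in for real traffic traces)."""
    return rng.normal(mean_lens, 60.0, size=(n, len(mean_lens)))

# Two hypothetical in-app actions with distinct traffic shapes.
post_msg = synth_flows([300, 1200, 90, 60], 50)      # e.g. "send message"
load_feed = synth_flows([120, 1400, 1400, 900], 50)  # e.g. "refresh feed"

X = np.vstack([post_msg, load_feed])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classifier: one centroid of features per action.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(flows):
    d = np.linalg.norm(flows[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

When the traffic shapes of two actions differ this much, even such a crude classifier separates them perfectly, which is why encryption alone does not hide *what* a user is doing.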

Speaker: Dr. Aditi Majumder, Professor, Department of Computer Science, University of California, Irvine, USA

Title of Talk: Ubiquitous Displays: Spatially Augmenting Reality Via Multiple Projector-Agents
 
Biography: Aditi Majumder is a Professor in the Department of Computer Science at the University of California, Irvine. She received her PhD from the Department of Computer Science, University of North Carolina at Chapel Hill in 2003, and her bachelor's degree in Computer Science and Engineering from Jadavpur University, Kolkata. Her research resides at the junction of computer graphics, vision, visualization and human-computer interaction, focusing on exploring new degrees of freedom in VR/AR displays and devices. She has more than 50 publications in top venues like ACM SIGGRAPH, Eurographics, IEEE VisWeek, IEEE Virtual Reality (VR) and IEEE Computer Vision and Pattern Recognition (CVPR), including best paper awards at IEEE VisWeek, IEEE VR and IEEE PROCAMS. She is the co-author of the book "Practical Multi-Projector Display Design". She has served as program or general chair and program committee member in several top venues including IEEE Virtual Reality (VR), ACM Virtual Reality Software and Technology (VRST), Eurographics and the IEEE Workshop on Projector Systems. She has served as Associate Editor for Computers & Graphics and IEEE Computer Graphics and Applications. She played a key role in developing the first curved-screen multi-projector display currently marketed by NEC/Alienware, and was an advisor at Disney Imagineering for advances in their projection-based theme park rides. She received the Faculty Research Incentive Award in 2009 and the Faculty Research Midcareer Award in 2011 from the School of Information and Computer Science at UCI. She is the recipient of the 2009 NSF CAREER award for Ubiquitous Displays Via a Distributed Framework. She was a Givens Associate and a student fellow at Argonne National Labs from 2001-2003, a Link Foundation Fellow from 2002-2003, and is currently a Senior Member of IEEE. She has recently launched her entrepreneurial venture via the startup Summit Technology Laboratory.
 
Abstract: State-of-the-art VR/AR devices provide access to completely virtual environments or augment information onto the existing world seen by the user. Wearable devices encumber users, and sometimes even cut them off completely from the real world; users interact with digital avatars instead of the real people themselves. Spatially augmented reality (SAR) instead focuses on illuminating physical spaces with one or more projectors which, when observed by one or more cameras, can enable unique interaction modalities between the physical space and multiple users without the encumbrance of any wearable device. This talk will present a new SAR paradigm of ubiquitous displays, where displays are not mere carriers of information but active members of the workspace, interacting with data, users, the environment and other displays. The goal is to integrate such active displays seamlessly with the environment, making them ubiquitous to multiple users and data. This talk will present an overview of the decade-long research at UCI that has been instrumental in making ubiquitous displays accessible and affordable. Such ubiquitous displays are already gaining traction as projected augmented reality and promise to be a critical component of the future collaborative workspace.

Speaker: Dr. Aditya Murthy, Centre for Neuroscience, Indian Institute of Science, Bengaluru, India

Title of Talk: Computational Mechanisms Underlying the Control of Simple and Complex Movements
 
Biography: Aditya Murthy's undergraduate training was at St. Xavier's College, Mumbai, and he obtained his Master's degree from Bombay University. His doctoral training was with Dr. Allen Humphrey in the Department of Neurobiology at the University of Pittsburgh, where he examined the neural mechanisms involved in the processing of motion in the visual system. For his postdoctoral training, he worked with Dr. Jeffrey Schall at Vanderbilt University, studying the primate visuomotor system to more directly relate neural activity to psychological functions and behaviour. He was a faculty member at the National Brain Research Centre, Manesar, prior to joining the Centre for Neuroscience, Indian Institute of Science, Bengaluru. Currently, he heads the Centre for Neuroscience, IISc.
 
Research Interests: Brain Mechanisms of Motor Control
The brain is the most complex information processing system known, and considerable neural machinery is devoted to making visuo-motor tasks such as reaching and grasping seem effortless. Drawing from research in robotics, many steps are likely to be involved in planning and executing movements. Some of these stages are decision-making or target selection, coordinate transformations, planning of kinematics and dynamics, error correction and performance monitoring. While movements in robots can be superior to naturally occurring movements in terms of speed and accuracy, they are still relatively primitive when it comes to mimicking natural behaviours that occur in unpredictable and unstructured environments. Our lab studies the neural and computational basis of movement planning and control, with an emphasis on understanding the basis of the flexibility and control that is the hallmark of intelligent action. From the perspective of behaviour, we seek to understand the nature of computations that enable motor control; from the perspective of the brain, we seek to understand the contribution of circumscribed neural circuits to motor behaviour; and by recording the electrical activity of neurons and muscles, we seek to understand how such computational processes are implemented by the brain. Our research interests span the fields of visual perception, decision-making and the generation of motor behaviour, and involve the application of cognitive/psychophysical, neuropsychological and electrophysiological techniques. We anticipate that in the long term this work will be useful for understanding the basis of different motor disorders and for developing brain-machine interface systems, which are only beginning to be exploited as engineering and the brain sciences increasingly interface.

 
Abstract: A fundamental computation that our brains must perform is the conversion of a stimulus into a motor act. This operation implicitly requires decision-making and motor planning. Using fast eye movements called saccades, which rapidly direct our gaze to points of interest in the visual scene, we investigate the computational architecture underlying flexible motor planning and control. Using the insights gained from these experiments, we will describe results from recent studies of how the brain might coordinate and control simultaneous eye and hand movements.
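One classic computational architecture used to study such rapid saccade decisions is a race between stochastic accumulators: each alternative accumulates noisy evidence, and the first to cross a threshold determines the choice and the reaction time. The sketch below is a generic illustration of that idea with made-up drift rates, noise and threshold, not the specific models discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(7)

def race_trial(drifts, threshold=1.0, noise=0.05, dt=0.001, t_max=1.0):
    """Race of stochastic accumulators: each alternative integrates
    noisy evidence; the first to reach threshold wins.
    Returns (winning alternative index, reaction time in seconds)."""
    x = np.zeros(len(drifts))
    t = 0.0
    while t < t_max:
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        t += dt
        if (x >= threshold).any():
            return int(x.argmax()), t
    return -1, t_max  # no response before the deadline

# Hypothetical 2-alternative trials: target (higher drift) vs. distractor.
results = [race_trial([3.0, 1.5]) for _ in range(200)]
choices = np.array([c for c, _ in results])
rts = np.array([rt for _, rt in results])
print(f"P(target chosen) = {(choices == 0).mean():.2f}, "
      f"mean RT = {rts.mean() * 1000:.0f} ms")
```

The attraction of such models is that a single mechanism jointly predicts choice probabilities and full reaction-time distributions, which is why they are widely used to link saccadic behaviour to accumulating neural activity.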

Speaker: Dr. V. Srinivasa Chakravarthy, Indian Institute of Technology Madras, Chennai. India

Title of Talk: Understanding the Parkinsonian Brain through Computational Modeling
 
Biography: V. Srinivasa Chakravarthy obtained his PhD degree from the University of Texas at Austin and received postdoctoral training at Baylor College of Medicine, Houston. He is currently a professor in the Department of Biotechnology at the Indian Institute of Technology Madras, India. His research interests are in the areas of computational neuroscience, computational cardiology and machine learning. In computational neuroscience, his interests include modelling the basal ganglia to understand Parkinson's disease, modelling neuron-astrocyte-vascular networks, and modelling the spatial cells of the hippocampus. He has written a book titled “Demystifying the Brain”, which presents the contemporary computational perspective of the brain to the lay reader using a minimum of equations.
 
Abstract: We present a model of the Basal Ganglia (BG) that departs from the classical Go/NoGo picture of the function of its key pathways, the direct and indirect pathways (DP & IP). Between the Go and NoGo regimes, we posit a third Explore regime, which denotes random exploration of action alternatives. Striatal dopamine is assumed to switch between DP and IP activation. The IP is modeled as a loop of the Subthalamic Nucleus (STN) and the Globus Pallidus externa (GPe). Simulations reveal that while the model displays Go and NoGo regimes for extreme values of dopamine, at intermediate values it exhibits exploratory behavior, which originates from the chaotic activity of the STN-GPe loop. We describe a series of BG models based on the Go/Explore/NoGo approach to explain the role of the BG in three cases: 1) reaching, 2) gait impairment and 3) willed action.
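A toy illustration of the Go/Explore/NoGo idea: action-selection stochasticity can be made a function of dopamine level, so that extreme dopamine values give near-deterministic behaviour while intermediate values yield exploration. In the model above that exploration arises from STN-GPe chaos; here it is reduced to a simple dopamine-dependent softmax temperature, and all numbers are hypothetical:

```python
import numpy as np

def action_probs(values, dopamine):
    """Softmax over action values with a dopamine-dependent temperature.
    Hypothetical mapping: exploration (temperature) peaks at intermediate
    dopamine (0.5) and falls off toward either extreme, mimicking a
    Go/Explore/NoGo continuum."""
    temperature = 0.02 + 2.0 * dopamine * (1.0 - dopamine)
    z = np.asarray(values) / temperature
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()

def entropy(p):
    """Shannon entropy of the action-selection distribution (nats)."""
    return -(p * np.log(p + 1e-12)).sum()

values = np.array([1.0, 0.8, 0.2])     # hypothetical action values

h_low = entropy(action_probs(values, 0.05))   # near-NoGo regime
h_mid = entropy(action_probs(values, 0.5))    # Explore regime
h_high = entropy(action_probs(values, 0.95))  # near-Go regime
print(h_low, h_mid, h_high)
```

Selection entropy is highest at intermediate dopamine, i.e. the model explores action alternatives there and becomes decisive at either dopamine extreme, which is the qualitative behaviour the abstract describes.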

Speaker: Dr. Bipin Nair, Professor and Dean, Amrita School of Biotechnology, Amrita University, Kollam, India

Title of Talk: Cost-Effective Device and Cloud Enabled Smart Solutions for Diabetes Care
 
Biography:

Bipin Nair received his Ph.D. in Microbiology in 1986 from the Department of Microbiology, M.S. University of Baroda, India, and received his post-doctoral training in the Dept. of Pharmacology at the University of Tennessee, Memphis, USA, from 1987-1992. His major contributions during that phase were in the areas of growth factor receptor signalling, GTP-binding proteins and second messenger pathways. Dr. Nair then moved to the Biotechnology industry in 1993 and held the position of Senior Scientist, Lead Discovery at MDS Pharma Services in Seattle, Washington, USA. His experience with High Throughput Screening and the application of novel technologies to a wide range of target areas resulted in many significant achievements for MDS during his tenure as Research Manager, Lead Discovery at MDS Pharma Services.

In Dec. 2004, Dr. Nair moved back to India and took over as Professor and Chairman of the Centre for Biotechnology, Amrita Vishwa Vidyapeetham, Amritapuri Campus. Under his leadership, the School of Biotechnology has been a trail-blazer in the Biotechnology arena for both undergraduate and postgraduate academic programs, as well as providing active research opportunities for a large number of students pursuing their Ph.D. program at the School. Dr. Nair is also the Coordinator of the TIFAC Centre of Relevance and Excellence in Biomedical Technology at Amrita University, under the Mission Reach program of the Department of Science and Technology, Govt. of India. Apart from setting up the state-of-the-art Amrita Biomedical Engineering (AMBE) Research Centre as part of the TIFAC CORE, Dr. Nair also led the group that developed a prototype of an Automated Insulin Pump, which resulted in Amrita University’s first patent from the USPTO. Subsequently, a number of patent applications (both India and USA) in the area of biomedical devices and diagnostics have been filed. With active national and international collaborations, Dr. Nair’s laboratory also has well-funded (DST, DBT, CSIR, MHRD) research initiatives in Natural Products Lead Discovery with a focus on wound healing and cancer. Dr. Nair has numerous publications in national and international scientific journals. He is also the Discipline-wise National Coordinator for the development of Biotechnology Virtual Labs being developed under the MHRD-NMEICT program. Another significant recent (April 2014) feather in Dr. Nair’s cap has been the selection of the Amrita School of Biotechnology by the Bill and Melinda Gates Foundation for the Gates Foundation-DBT-BIRAC Grand Challenge India Sanitation Award. Dr. Nair is an Associate Editor for the international journal ‘Current Pharmacogenomics and Personalized Medicine’ and also serves on several national and international advisory committees.

Abstract: Diabetes Mellitus is a public health problem affecting 65 million people in India, with another 21 million people probably being in the prediabetic stage. The healthcare infrastructure for diabetes care in India, especially for the middle- and low-income population, is extremely fragmented. With accessibility, availability and affordability being the issues of optimal healthcare today, there is an acute need for proactive, low-cost intelligent systems for diabetes care. The present smart solution being developed involves a totally indigenous, automated insulin pump that is a dual-microcontroller-based compact programmable drug infusion system. This US-patented, cost-effective system could be highly effective for administering insulin therapy, which is the gold standard for diabetes care. The smart solution also comprises a unique and cost-effective, US-patented non-enzymatic glucose monitoring system, which involves a glucometer, connected devices, mobility, cloud and a helpline-based ecosystem. In a mutually beneficial collaboration with Wipro Technologies, Amrita University aims to develop this smart solution with the goal of providing low-cost quality care, especially for the economically challenged diabetic population across the globe.

Speaker: Dr. Shyam Diwakar, Amrita University, Kollam, India

Title of Talk: Computational Neuroscience of the Cerebellum and Interconnected Circuits
 
Biography: Shyam Diwakar is the Lab Director of the Computational Neuroscience and Neurophysiology Laboratory, a Faculty Fellow at the Amrita Center for International Programs and an Associate Professor at Amrita School of Biotechnology, Amrita University. He was awarded the Sir Visvesvaraya Young Faculty Research Fellowship by the Department of Electronics and Information Technology, Govt. of India, in April 2016, and the NVIDIA Innovation Award in December 2015. He has also served as an executive committee member of the Indian Academy of Neurosciences since October 2014. He holds a Ph.D. degree in Computational Sciences from the University of Milan, Italy, and worked as a Postdoctoral Researcher at the Department of Physiology, University of Pavia, Italy. He is a Senior Member of IEEE, a Faculty Member of the Organization for Computational Neurosciences (OCNS) and a Life Member of the Indian Academy of Neuroscience.
 
Abstract: Over the last few years, the cerebellum, a small yet significant region of the brain, has been implicated in ataxias, Parkinson's disease, Alzheimer's disease and several other neurological conditions. Signal coding in the cerebellum happens at the input granular layer, which comprises a large percentage of the brain's cells. With mathematical modeling as the focus, this talk will elucidate the information transmission and signal recoding properties of the cerebellum. To understand circuit function, we modeled population activity such as local field potentials and fMRI BOLD signals. Local field potentials (LFPs) were reconstructed (Parasuram, 2016, 2017) to test and parameterize the molecular mechanisms of cellular function with network properties. The role of synaptic plasticity in modifying the information capability of this circuit will also be addressed. To conclude, we will show how abstractions of such brain circuits can serve as algorithms in robotic control and deep learning.

Speaker: Dilip Krishnaswamy, IBM Research, India

Title of Talk: Searching for Gravitational Waves

Biography: Dr. Dilip Krishnaswamy is a senior scientist at IBM Research in Bangalore. His current research interests include distributed information processing and machine learning, distributed data centers, edge services, distributed resource/energy management, distributed function virtualization, 5G architecture/systems, smarter planet / IoT / M2M systems, cognitive systems, and blockchain technology. He received the PhD degree in electrical engineering from the University of Illinois at Urbana-Champaign. He has worked as a Platform Architect at Intel and as a Senior Staff Researcher in the Office of the Chief Scientist at Qualcomm, and has taught at the University of California, Davis. He served as the Associate Editor-in-Chief of IEEE Wireless Communications from 2009-2014. He is an inventor on 58 granted US patents and has published 70+ papers, with 3 best paper awards. He is a B.Tech (electronics and communications engineering) alum of IIT Madras.
 
Abstract: This talk will give an overview of research in progress on the detection of gravitational waves. The talk will address both the physics and the computational aspects of the problem. Gravitational wave detectors in India and Japan are expected to come online soon to complement the detectors in the US and in Europe. This distributed network of detectors will work jointly to detect events and to estimate parameters such as the event localization and the masses and spins of the black holes or neutron stars involved. One can expect new distributed algorithms related to signal processing, distributed computing and the application of machine learning techniques to be developed in the search for gravitational waves, leading to exciting new research opportunities in this domain.
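A core signal-processing step in such searches is matched filtering: correlating detector output against waveform templates and looking for correlation peaks. A minimal single-template sketch in NumPy follows; the toy chirp template, noise level and injection point are all made up, and real pipelines additionally whiten the coloured detector noise and search over large template banks:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                              # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)

# Toy chirp-like template: frequency sweeping upward under a
# Gaussian envelope (loosely inspired by a compact-binary chirp).
template = np.sin(2 * np.pi * (50.0 * t + 40.0 * t ** 2)) \
           * np.exp(-((t - 0.5) / 0.2) ** 2)

# Simulated strain: white noise plus the template injected at a
# known sample offset.
data = 0.5 * rng.standard_normal(2 * fs)
inject_at = 1000
data[inject_at:inject_at + template.size] += template

# Matched filter: sliding correlation of the data with the template.
mf = np.correlate(data, template, mode="valid")
peak = int(mf.argmax())
print("recovered offset:", peak)
```

The correlation peak recovers the injection time even though the signal is invisible by eye in the noisy trace, which is the basic reason template-based searches can dig astrophysical chirps out of detector noise.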

Speaker: Dr Arun Hampapur, Director, Integration Engineering and Project Office,  IBM Services Platform with Watson, Fellow, IEEE, GTS Labs,  IBM India

Title of Talk: Industrial Applications of Big Data and Cognitive Technologies: From Cognitive Cities, to Cognitive Commerce and Cognitive Clouds

Biography: Arun leads a worldwide team of developers building the IBM Services Platform, which leverages cognitive technologies for delivering managed hybrid cloud services. Arun’s goal is to make Enterprise IT as consumable as consumer IT (think Apple).
Prior to joining GTS Labs, Arun was a Director at IBM Research, where he led the worldwide Commerce Research effort for IBM’s Research Division. His teams worked on multiple aspects of commerce operations, from Marketing to e-Commerce, Supply Chain, B2B Integration and solutions for customer service. These solutions heavily leverage big data analytics, predictive analytics, data mining and optimization technologies, and deliver value in specific industry contexts.
From 2009 to June 2012, Arun was a Distinguished Engineer and Director in the Business Analytics and Math Sciences Department of IBM Research. In this role Arun led the creation of analytic solutions targeted at multiple industries in support of operations optimization, asset optimization, condition-based management, cross-agency coordination, safety and security.
Prior to this assignment, Arun was on assignment to IBM Global Technology Services from 2008 to 2009 as the CTO of Physical Security, where he provided technical direction for the development of physical security solutions and services with a focus on smart video surveillance systems. The video analytics and core technology for smart surveillance were originally developed in IBM Research from 2001 to 2006 under Arun’s leadership. Dr Hampapur managed the Exploratory Computer Vision Group at IBM Research from 2003 to 2006 and provided direction for several research activities including Biometrics, Video Analytics, Retail Checkout Analytics and video indexing.
He has published more than 80 papers on various topics related to asset optimization, video analysis, pattern recognition, searchable video and video surveillance and holds 60 US patents and more than 100 patent applications. 
Dr Hampapur is a member of the IBM Academy of Technology, an IBM Master Inventor. Dr Hampapur obtained his PhD from the University of Michigan in 1995. He is a Fellow of the IEEE (class of 2011).

Speaker: Dr Gautam Bhat, Associate Director (Senior Technical Staff Member), GTS Labs, IBM India 

Title of Talk: Role of DevOps & Agile Methodology in Devising a Cognitive Automation Framework

Biography: Gautam Bhat is a Senior Technical Staff Member and an Associate Director with IBM’s GTS Labs. In this worldwide leadership role he is responsible for driving hybrid cloud solutions for clients and the development of Cloud offerings and cognitive solutions. Gautam is TOGAF certified and an Open Group Distinguished IT Specialist, and leads IBM's IT Specialist Profession for the India/South Asia region, comprising about 50,000 IT Specialists.
Gautam is a co-chair for the Open Group's SOA Certification project, comprising several industry participants from Infosys, TCS, CTS, American Express, Siemens and IEEE. This group has published several white papers since 2013.
Gautam is an IBM Inventor, Member of the IBM Academy of Technology Team and a Business Facilitator for IBM's Technical Leadership program (which was ranked top 10 by LearningElite) and is also a Sales Advisor for IBM's Global Sales School program.
Gautam has several publications, patents and conference presentations to his credit, and is a regular speaker at the Enterprise Architecture forum in The Open Group. He is frequently invited to talk on CAMSS at several international conferences, such as the Systems Engineering conference, open-source conferences and the like.

Speaker: Prof. Prabin Kumar Bora, Professor, Department of Electrical and Electronics Engineering, Indian Institute of Technology Guwahati, India

Title of Talk: Illumination Estimation for Image Forensics

Biography: Prof. Prabin Kumar Bora received his M.E. and PhD degrees from the Indian Institute of Science, Bangalore, both in Electrical Engineering. Since 1997, he has been associated with the Department of Electrical and Electronics Engineering, Indian Institute of Technology Guwahati, where he is now a professor. His areas of interest include image processing and computer vision, and the application of signal processing techniques to physiological and communication signals. He has guided 10 PhD students and over 60 M.Tech. students. He has published more than 120 journal and international conference papers.
Abstract: Illumination has been found to be an effective cue for detecting splicing forgeries in images. In illumination-based image forensics, the illumination information is estimated from different objects present in an image, and the estimated illumination features are then compared to check for possible forgeries. The forensics community has been using two different aspects of illumination for forgery detection: the illumination direction and the illumination colour. Hany Farid and his team at Dartmouth College pioneered the work on illumination direction-based forensics. In their first method, they estimated the 2D illumination direction from the shading and 2D object contour normals and checked for consistency among the illumination directions. Later, they proposed to use spherical harmonic (SH) analysis to estimate a more complex illumination environment in terms of the SH coefficients, making the method more applicable to real-life images. To estimate the full 3D illumination environment, a 3D face model is created from some face images; the 3D surface normals for a test face image are extracted by applying this model and used to estimate the lighting environment. Recently, Peng et al. have proposed a more accurate 3D SH-based method by relaxing some less practical assumptions about human faces. Like the illumination direction, the illumination colour has proved to be an effective cue for checking the authenticity of images. In the first illumination colour-based method, we showed how illumination colour can be effectively used to expose splicing forgery. In the method, we first created a dichromatic plane (DP) from the specular highlights of each object utilising the dichromatic reflection model (DRM). We showed that for an authentic image, the DPs estimated from different objects intersect at a single point; for a forged image, there will be more than one intersection point. Carvalho et al. from Brazil have proposed a machine learning-based approach, where a new image, called the illuminant map (IM), is first created by replacing each homogeneous region with its illuminant colour. Then, machine learning-based classifiers are trained to capture the texture, shape and colour inconsistencies present in spliced images. The presentation will outline these techniques.
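The 2D illumination-direction estimate described in the abstract reduces to a linear least-squares problem: under a Lambertian assumption, the observed intensity at a contour point is approximately I = n . L + A for surface normal n, light direction L and ambient term A, so stacking many contour points gives an overdetermined linear system. A small synthetic sketch (the normals, light angle and intensities below are fabricated purely to illustrate the math):

```python
import numpy as np

def estimate_light_2d(normals, intensities):
    """Least-squares estimate of the 2D light direction L and ambient
    term A from I_k ~= n_k . L + A at object-contour points
    (Lambertian shading model)."""
    M = np.column_stack([normals, np.ones(len(normals))])  # [nx, ny, 1]
    sol, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    L, A = sol[:2], sol[2]
    return L / np.linalg.norm(L), A   # unit light direction, ambient

# Synthetic contour: unit normals around a circle, lit from a known
# (hypothetical) angle of 0.6 rad, plus a constant ambient term.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
normals = np.column_stack([np.cos(theta), np.sin(theta)])
true_L = np.array([np.cos(0.6), np.sin(0.6)])
intensities = np.clip(normals @ true_L, 0, None) + 0.2

# Use only the lit part of the contour (where n . L is clearly positive),
# since shadowed points violate the linear model.
lit = normals @ true_L > 0.1
L_hat, A_hat = estimate_light_2d(normals[lit], intensities[lit])
print(L_hat, A_hat)
```

Running such an estimate independently on each object in an image and comparing the recovered directions is the consistency check that the direction-based forensic methods above rely on.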