Deep Learning World Munich 2019

May 6-7, 2019 – Holiday Inn Munich – City Centre


First confirmed sessions

Jonathan Greve

Machine Learning Engineer

Dr. Sébastien Foucaud

Vice-President Data Science

Deep Cars: Neural Networks and Recommenders in the Online Marketplace at Heycar

A good merchant knows what their customers want, but in the world of online classifieds and e-commerce, do we really know our customers? How can we tell who is serious and who is simply passing by? This talk will discuss the approach taken at Heycar for building relevant recommender systems to drive improvements in leads, page views, customer retention and much more. We examine the evolution from implicit matrix factorisation approaches in collaborative filtering to deep neural networks that predict both hybrid user-item and item-item recommendations, and how those models are brought into production. We will also examine insights gained from the use of deep neural networks for recommending.
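Heycar's actual models are not public, but the implicit matrix factorisation baseline the abstract mentions can be sketched in a few lines. The following is an illustrative NumPy implementation of alternating least squares on a toy user-item interaction matrix; all names, sizes and hyperparameters are invented for the example.

```python
import numpy as np

def implicit_als(R, factors=2, reg=0.1, iters=20, seed=0):
    """Factorise an implicit-feedback matrix R (users x items) into
    user and item factor matrices via alternating least squares."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    V = rng.normal(scale=0.1, size=(n_items, factors))
    I = reg * np.eye(factors)
    for _ in range(iters):
        # Fix item factors, solve a ridge regression for the user factors,
        # then do the symmetric update for the item factors.
        U = np.linalg.solve(V.T @ V + I, V.T @ R.T).T
        V = np.linalg.solve(U.T @ U + I, U.T @ R).T
    return U, V

# Toy interaction matrix: rows = users, columns = cars they viewed.
R = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
U, V = implicit_als(R)
scores = U @ V.T                       # predicted affinity per user-item pair
top_item = int(np.argmax(scores[0]))   # best candidate for user 0
```

The reconstructed score matrix gives item-item similarity for free (`V @ V.T`), which is one reason factorisation models remain a common starting point before moving to the deep hybrid recommenders the talk covers.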

Muzahid Hussain

Research Engineer

Debugging and Visualizing TensorFlow Programs with Images

Deep learning frameworks provide the critical building blocks for designing, training and validating deep networks. They offer a high level of abstraction with a flexible and simple programming interface for a diverse community of deep learning practitioners. TensorFlow has undoubtedly emerged as the industry's most popular framework, adopted by many software giants thanks to its highly scalable and flexible system architecture. When it comes to an easy programming interface and debugging, however, it remains a pain due to its static graph construction approach. Tools like the TensorFlow Debugger (tfdbg) and TensorBoard were introduced specifically to make it simpler to debug and to visualize the state of learning. In this deep dive we will try to understand deep learning concepts applied in computer vision and image processing. Across the myriad of applications involving images, these tools help the user understand the state of a learning algorithm.
We will examine a simple deep learning based image recognition program, break it, and learn to fix it effectively with the help of tools like tfdbg and TensorBoard. We will then explore the more sophisticated concepts of tuning a deep network by visualizing the effects in TensorBoard. We will also explore more complex projects, such as image segmentation on real-world data and image domain transfer with adversarial networks, through TensorBoard. We will learn how these handy tools can make a big difference in deciphering the complexities of an algorithm and remove the "Don't know what to do next!" moments from a DL practitioner's life.
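The canonical bug these tools catch is a tensor going non-finite mid-training (tfdbg ships a filter for exactly this). As a framework-agnostic illustration of what such a check does, here is a NumPy sketch of a toy training loop that diverges because of an absurd learning rate, with a watchdog that reports the first offending step; all names here are invented for the example, not TensorFlow API.

```python
import numpy as np

def check_finite(name, tensor, step):
    """Mimic the idea of tfdbg's inf/nan filter: flag any tensor that
    contains NaN or Inf so the offending step can be inspected."""
    if not np.all(np.isfinite(tensor)):
        return f"step {step}: tensor '{name}' contains NaN/Inf"
    return None

# A toy least-squares training loop whose learning rate is deliberately
# far too high, so the weights blow up to inf/nan after some steps.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = x @ np.array([1.0, -2.0, 0.5])
w = rng.normal(size=3)
lr = 100.0                              # absurd learning rate -> divergence
report = None
for step in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(y)
    w -= lr * grad
    report = check_finite("weights", w, step)
    if report:                           # stop at the first bad step
        break
```

In real TensorFlow the same idea is a one-liner filter attached to the debugger session or a scalar/histogram summary inspected in TensorBoard; the point is that catching the *first* bad step, rather than the final garbage output, is what makes the bug tractable.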

Automated Assessment of Legal Contracts: Lessons Learned From Putting a RNN Into Production

In 2018 we were asked to develop software able to judge the legal validity of certain paragraphs within a contract. After exposing lawyers to an early prototype, we realized that we needed to give detailed explanations of how the neural net derives its decisions. As a consequence, we decided to split the problem into different modules, which helped greatly in creating better transparency. Now that the software is in production, we would like to share our learnings and also discuss the quality of the predictions.
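The talk's actual architecture is not public, but the "split into modules" idea can be sketched: instead of one end-to-end network, each stage exposes an inspectable intermediate result that can be shown to a lawyer. In this hypothetical Python sketch the RNN classifier is replaced by a toy keyword rule so the example stays self-contained; every name and rule here is invented.

```python
def segment_clauses(contract_text):
    """Module 1: split a contract into individual clauses."""
    return [c.strip() for c in contract_text.split("\n") if c.strip()]

def classify_clause(clause):
    """Module 2: stand-in for the RNN classifier. The returned
    'evidence' field is what makes the decision explainable."""
    risky = {"unlimited liability", "perpetual"}
    hits = [term for term in risky if term in clause.lower()]
    label = "invalid" if hits else "valid"
    return {"clause": clause, "label": label, "evidence": hits}

def assess(contract_text):
    """Module 3: aggregate per-clause results into a report that keeps
    the evidence behind each decision."""
    return [classify_clause(c) for c in segment_clauses(contract_text)]

report = assess("The vendor accepts unlimited liability.\n"
                "Payment is due within 30 days.")
```

The design point is that each module's output is a human-readable artifact, so a wrong verdict can be traced to the stage that produced it rather than to an opaque end-to-end score.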

Dr. Christian Spindler

Manager / Data Scientist

Universal Perturbations of Computer Vision Systems

Image recognition is an essential part of autonomous driving technology. Cars have to recognize a multitude of items in a front-camera scene. Deep learning networks are the state-of-the-art modelling approach for making decisions based on the camera image feed. Unfortunately, research has shown that universal perturbations on this image feed can be designed so as to corrupt the networks' decisions. This fact has strong implications for the security and safety of autonomous cars today. This deep dive explains how perturbations work and whether and how they can be detected in the data. The talk includes a live demonstration in which a perturbation is constructed from data and applied to a self-driving car's street scene.
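The core mechanism is easy to show on a toy model. A universal perturbation is a single, image-agnostic delta that fools the classifier on most inputs; for a linear "classifier" the gradient with respect to the input is just the weight vector, so a sign-of-gradient step (the FGSM idea, here simplified far below the actual universal-perturbation algorithm) already shifts every score toward the decision boundary. The following NumPy sketch is purely illustrative; all sizes and values are invented.

```python
import numpy as np

# Toy linear "image classifier": score = w . x, class = sign(score).
rng = np.random.default_rng(1)
w = rng.normal(size=16)               # classifier weights
images = rng.normal(size=(5, 16))     # five flattened "images"
labels = np.sign(images @ w)          # the clean predictions

# For this model the input gradient of the score is simply w, so a
# single shared direction -eps * sign(w) pushes every image's score
# toward zero. One delta for all images is the essence of a
# *universal* perturbation.
eps = 1.0
delta = -eps * np.sign(w)             # image-agnostic direction

flipped = 0
for x, y in zip(images, labels):
    adv = x + y * delta               # push against the predicted class
    if np.sign(adv @ w) != y:
        flipped += 1
```

Real universal perturbations (computed against deep networks over many images) exploit the same geometry: decision boundaries lie close to the data in a few shared directions, so one small delta transfers across images.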

AI on the Edge – How to use FPGAs to accelerate your AI

Nowadays, there are various trends in boosting the training and execution of deep neural networks through hardware acceleration. It is therefore worthwhile to take a deeper look at using FPGAs to push the execution of DNNs into new spheres. Taking pre-trained models and adapting them to specific problems via transfer learning puts FPGAs, at least for the execution of DNNs, in a league of their own. This talk will show how to use FPGAs in specific scenarios, such as quality checks in factories or security surveillance.
It will also provide further insights into how to use FPGAs to accelerate the execution of AI models.
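One step that almost always precedes mapping a DNN onto FPGA fabric is converting float weights to fixed-point, because integer multipliers are far cheaper in logic than floating-point ones. As a hedged, framework-independent illustration, here is a NumPy sketch of symmetric per-tensor 8-bit quantisation; the function name and sizes are invented for the example.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantisation: map float weights to
    int8 codes plus one scale factor, the kind of conversion commonly
    done before deploying a DNN to fixed-point FPGA hardware."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale      # dequantised approximation
max_err = float(np.max(np.abs(w - w_hat)))  # bounded by scale / 2
```

On the device, the matrix multiplies then run entirely in int8/int32, with the float scale applied once at the output; the rounding error per weight is at most half the scale.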

SK Reddy

Chief Product Officer AI

3D Point Clouds for Autonomous Connected Ecosystem

Processing 3D images has many use cases: improving autonomous driving, enabling digital conversions of old factory buildings, enabling augmented reality solutions for medical surgery, and more. The size of the point cloud, the number of points, its sparse and irregular structure, and the adverse impact of light reflections, (partial) occlusions, etc. make it difficult for engineers to process point clouds.
3D point cloud processing is increasingly used to solve Industry 4.0 use cases, helping architects, builders and product managers. I will share some of the innovations that are advancing 3D point cloud processing, as well as the practical implementation issues we faced while developing deep learning models to make sense of 3D point clouds.
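The size and irregularity problems the abstract mentions are typically attacked first by voxel-grid downsampling: snap each point to a coarse grid and keep one centroid per occupied cell. The sketch below is a generic NumPy illustration of that preprocessing step, not the speaker's pipeline; the voxel size and cloud are invented.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce an irregular point cloud to one centroid per occupied
    voxel - a common preprocessing step before a deep network."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Encode each point's 3-D voxel index as a single scalar key
    # (indices here are small and non-negative, so no collisions).
    key = (idx[:, 0] * 1_000 + idx[:, 1]) * 1_000 + idx[:, 2]
    uniq, inverse = np.unique(key, return_inverse=True)
    counts = np.bincount(inverse)
    # Average the points falling into each voxel, one axis at a time.
    centroids = np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts
         for d in range(3)], axis=1)
    return centroids

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(10_000, 3))   # dense, irregular cloud
down = voxel_downsample(cloud, voxel_size=0.25)   # at most 4**3 = 64 cells
```

After downsampling, the cloud has bounded, roughly uniform density, which makes it far more tractable for the point-based deep learning models the talk discusses.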
