Deep Learning World Munich 2019
May 6-7, 2019 – Holiday Inn Munich – City Centre
Deep Learning World - Munich - Day 1 - Monday, 6th May 2019
When attempting to solve AI-based problems we often face barriers to entry: data scarcity, data sparsity, class imbalance, and on-premise data storage requirements are just a few. Recent advancements in the world of deep learning have brought new solutions for dealing with practical challenges around noisy data, data augmentation and generation, visualization, and acceleration. In this talk David Austin will give a broad overview of recent advancements that can help bring AI-based solutions to market.
Processing 3D images has many use cases: improving autonomous driving, enabling digital conversion of old factory buildings, enabling augmented reality solutions for medical surgery, and more. The size of the point cloud, the number of points, its sparse and irregular structure, and the adverse impact of light reflections and (partial) occlusions make it difficult for engineers to process point clouds. 3D point cloud processing is increasingly used to solve Industry 4.0 use cases, helping architects, builders and product managers. I will share some of the innovations driving progress in 3D point cloud processing, as well as the practical implementation issues we faced while developing deep learning models to make sense of 3D point clouds.
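One of those practical issues is raw point-cloud size. A common first step is voxel-grid downsampling; the sketch below is a hedged illustration (the point cloud is random, not a real scan, and this is not the speaker's pipeline):

```python
import numpy as np

# A hedged sketch of voxel-grid downsampling: all points falling into the
# same voxel are replaced by their centroid, shrinking a raw scan to a
# size a deep network can consume.

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(100_000, 3))   # hypothetical raw scan

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by their centroid."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()          # guard against shape differences across numpy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # sum the points per occupied voxel
    return sums / counts[:, None]

small = voxel_downsample(points, voxel_size=1.0)
print(points.shape[0], "->", small.shape[0])
```

The same idea, with octrees or learned sampling instead of a fixed grid, underlies many production point-cloud pipelines.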
Deep learning frameworks provide the critical building blocks for designing, training and validating deep networks. They offer a high level of abstraction with a flexible and simple programming interface to a diverse community of deep learning practitioners. TensorFlow has undoubtedly emerged as the industry's most popular framework, adopted by many software giants due to its highly scalable and flexible system architecture. When it comes to an easy programming interface and debugging, however, it can still be a pain due to its static graph construction approach. Tools like the tf debugger and TensorBoard were introduced specifically to make it simpler for users to debug and visualize the state of learning. In this deep dive we will try to understand deep learning concepts applied to computer vision and image processing. Across the myriad of applications with images, these tools help the user understand the state of a learning algorithm. We will examine a simple deep learning based image recognition program, break it, and learn to fix it effectively with the help of tools like the tf debugger and TensorBoard. We will then work through the subtler concepts of tuning a deep network by visualizing the effects in TensorBoard. We will also explore more complex projects, such as image segmentation on real-world data and image domain transfer with adversarial networks, again with TensorBoard. We will learn how these handy tools can make a big difference in deciphering the complexities of an algorithm and remove the "Don't know what to do next!" situation from a DL practitioner's life.
In 2018 we were asked to develop software able to judge the legal validity of certain paragraphs within a contract. After exposing lawyers to an early prototype, we realised that we needed to give detailed explanations of how the neural net derives its decisions. As a consequence, we decided to split the problem into different modules, which greatly improved transparency. Now that the software is in production we would like to share our learnings and also discuss the quality of the predictions.
From the start Deep Learning World has been the place to discuss and share our common problems. These are your people – they understand your situation. Often rated the best session of all, sharing your problems with like-minded professionals is your path to answers and a stronger professional network.
Choose your most burning topic and discuss it with your colleagues:
- Beyond Image Recognition: What are the Deep Learning Use Cases with Real Business Impact?
- Jobs below the API: Can we automate with workers' dignity in mind?
Nowadays, there are various approaches to boosting the training and execution of deep neural networks using hardware acceleration. It is therefore worthwhile to take a deeper look at using FPGAs to lift the execution of DNNs to new spheres. Taking pre-trained models and adapting them to specific problems via transfer learning puts FPGAs, at least for the execution of DNNs, in a league of their own. This talk will show how to use FPGAs in specific scenarios such as quality checks in factories or security surveillance, and will provide further insights into how FPGAs can accelerate the execution of AI models.
Deep Learning World - Munich - Day 2 - Tuesday, 7th May 2019
Image recognition is an essential part of autonomous driving technology. Cars have to recognise a multitude of items in a front camera scene. Deep learning networks are the state-of-the-art modelling approach for taking decisions based on the camera image feed. Unfortunately, research has shown that universal perturbations of this image feed can be designed to corrupt the networks’ decisions. This fact has strong implications for the security and safety of autonomous cars today. This deep dive explains how perturbations work and if and how they can be detected in the data. The talk includes a live demonstration where a perturbation is constructed from data and applied to a self-driving car’s street scene.
With the growing amount of products and information, and a significant rise in the number of users, it becomes increasingly important for companies to search, map and provide users with the relevant chunk of information or products according to their preferences and tastes. Let's talk about deep learning approaches in recommender systems, which are gaining more and more popularity, their advantages and disadvantages, and the specific scenarios in which they are most effective.
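The core idea these approaches build on can be sketched in a few lines. This is an illustration only (the data and dimensions are invented, and real deep recommenders stack neural layers on top of the embeddings):

```python
import numpy as np

# Minimal embedding-based recommender: users and items are mapped to
# learned vectors, and a dot product predicts preference.

rng = np.random.default_rng(1)

n_users, n_items, dim = 6, 8, 4
U = 0.5 * rng.normal(size=(n_users, dim))   # user embeddings
V = 0.5 * rng.normal(size=(n_items, dim))   # item embeddings

# Hypothetical implicit feedback: (user, item, liked-or-not).
interactions = [(0, 1, 1), (0, 2, 1), (1, 2, 1), (1, 5, 0),
                (2, 3, 1), (3, 4, 0), (4, 6, 1), (5, 7, 1)]

lr = 0.1
for _ in range(1000):                       # SGD on squared error
    for u, i, r in interactions:
        err = U[u] @ V[i] - r
        u_old = U[u].copy()
        U[u] -= lr * err * V[i]
        V[i] -= lr * err * u_old

# Score any (user, item) pair by the dot product of the learned embeddings.
print(U[0] @ V[1], U[1] @ V[5])
```

Deep variants replace the dot product with a neural network over the concatenated embeddings, which is where the advantages and disadvantages discussed in this session come in.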
Finding anomalies is only one part of a comprehensive repair/maintenance solution. Not all anomalies are problems and not all problems need to be fixed. Understanding the context behind a sensor reading — where anomalies come from, what they mean, and what needs to be done about them — requires extracting intent data from other sources, be they historical service orders, OEM manuals, or human heuristics. This gives symptom and resolution information that guides the process from complaint to correction. This presentation covers our AI models that enable intent discovery in one of the noisiest domains: automotive. Learn how we made sense of data coming from 70,000+ different vehicle makes and models.
Recommendation systems (RecSys) are now fully integrated into users' daily experience, helping them discover content on digital platforms. Jonathan Greve and Sébastien Foucaud will compare the RecSys implemented at XING and at heycar, based on classical machine learning and ensembles for the former and deep neural networks and embeddings for the latter. They will in particular highlight the differences in business impact and propose future developments joining these approaches.
Embeddings have become a powerful tool for representing discrete entities as continuous vectors. Recent advances have extended the original text embedding framework to accommodate new types of data. Of particular interest is a novel algorithm called “node2vec” that learns embeddings for nodes in a network. This is an especially pertinent use case for our team, since WeWork’s member community can conveniently be expressed in graphical form. In this talk, we’ll discuss how we use “node2vec” to create rich feature representations of WeWork communities, and then build recommendation services that are powered by these trained models.
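At the heart of node2vec are biased second-order random walks, whose visited sequences are then fed to a word2vec-style model. The sketch below is a hedged illustration of that walk procedure on an invented toy graph, not WeWork's code:

```python
import numpy as np

# Biased node2vec-style walks: the return parameter p penalises stepping
# back, and the in-out parameter q trades off BFS-like vs DFS-like moves.

rng = np.random.default_rng(42)

# A tiny undirected graph as an adjacency list (hypothetical community graph).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}

def node2vec_walk(graph, start, length, p=1.0, q=1.0):
    """Generate one biased walk starting at `start`."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:                  # first step is unbiased
            walk.append(nbrs[rng.integers(len(nbrs))])
            continue
        prev = walk[-2]
        weights = []
        for nxt in nbrs:                    # second-order bias per candidate
            if nxt == prev:
                weights.append(1.0 / p)     # returning to previous node
            elif nxt in graph[prev]:
                weights.append(1.0)         # distance 1 from prev (BFS-like)
            else:
                weights.append(1.0 / q)     # distance 2 from prev (DFS-like)
        probs = np.array(weights) / sum(weights)
        walk.append(nbrs[rng.choice(len(nbrs), p=probs)])
    return walk

walks = [node2vec_walk(graph, s, length=5) for s in graph for _ in range(2)]
print(walks[0])
```

The collected walks play the role of "sentences": training a skip-gram model on them yields the node embeddings that then power the recommendation services.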
Human languages are complex, diverse and riddled with exceptions – translating between different languages is therefore a highly challenging technical problem. Deep learning approaches have proved powerful in modelling the intricacies of language, and have surpassed all statistics-based methods for automated translation. This session begins with an introduction to the problem of machine translation and discusses the two dominant neural architectures for solving it – recurrent neural networks and transformers. A practical overview of the workflow involved in training, optimising and adapting a competitive neural machine translation system is provided. Attendees will gain an understanding of the internal workings and capabilities of state-of-the-art systems for automatic translation, as well as an appreciation of the key challenges and open problems in the field.
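The core computation of the transformer architecture mentioned above is scaled dot-product attention. The sketch below is a hedged numpy illustration with made-up dimensions, not a translation system:

```python
import numpy as np

# Scaled dot-product attention: each query position produces a weighted
# mix of the value vectors, with weights given by query-key similarity.

rng = np.random.default_rng(3)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # query-key similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

# Toy setting: 4 source tokens attended to by 3 target queries, dimension 8.
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
Q = rng.normal(size=(3, 8))

out, weights = attention(Q, K, V)
print(out.shape)   # (3, 8): one mixed vector per query
```

In a full transformer this block is repeated across multiple heads and layers, with learned projections producing Q, K and V; recurrent architectures instead process the sequence step by step.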
A delayed reward machine learning (ML) algorithm can be used to learn manufacturing techniques for error correction and whole part/process variability minimization to improve yield. We have designed and validated such a system for 3D printing and will show error detection, classification and changes applied to the part to correct for the error, such that the overall quality (as defined by part tensile strength) is recovered after the introduction of the error. Such a system can be applied to other manufacturing tasks and for medical 3D printing.
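The delayed-reward setting can be made concrete with a toy example. The tiny MDP below is invented for illustration and is not the authors' system: an agent decides, layer by layer of a simulated print, whether to apply a correction, and the quality reward only arrives once the part is finished.

```python
import numpy as np

# Tabular Q-learning on a toy "print with an error" task. Actions:
# 0 = continue printing, 1 = apply a correction (small immediate cost).
# The quality reward is delayed until the part is complete.

rng = np.random.default_rng(7)

N_LAYERS = 5
Q = np.zeros((N_LAYERS, 2, 2))   # Q[layer, error_present, action]

def step(layer, error, action):
    """Deterministic toy environment with a delayed terminal reward."""
    reward = -0.05 if action == 1 else 0.0    # correction cost
    error = 0 if action == 1 else error
    layer += 1
    done = layer == N_LAYERS
    if done:
        reward += 1.0 if error == 0 else 0.0  # delayed quality reward
    return layer, error, reward, done

alpha, gamma, eps = 0.5, 1.0, 0.2
for _ in range(2000):                          # Q-learning episodes
    layer, error, done = 0, 1, False           # an error appears at layer 0
    while not done:
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[layer, error]))
        nlayer, nerror, r, done = step(layer, error, a)
        target = r if done else r + gamma * Q[nlayer, nerror].max()
        Q[layer, error, a] += alpha * (target - Q[layer, error, a])
        layer, error = nlayer, nerror

# Greedy rollout with the learned policy: the error gets corrected.
layer, error, total, done = 0, 1, 0.0, False
while not done:
    a = int(np.argmax(Q[layer, error]))
    layer, error, r, done = step(layer, error, a)
    total += r
print(total)
```

The real system replaces this toy state with sensor observations of the part and the tabular Q with a learned model, but the credit-assignment problem (acting now, being rewarded only at final quality measurement) is the same.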
Healthcare is emerging as a prominent area for deep learning applications, which promise to improve the quality of life of millions of patients worldwide. In such a regulated industry, however, innovators aiming to seize this chance face one major issue: achieving regulatory compliance. Using a real case study, this talk will guide the audience through the current American and European regulatory frameworks for medical devices and provide a step-by-step guide to market for deep learning applications, highlighting the main challenges and pitfalls to avoid as well as the key issues a company needs to consider to succeed in this endeavor.
A machine learning solution is only as good as the end-user deems it. More often than not, we do not think through how results are communicated or measured. If we want users to trust and correctly interpret AI models, we need to make our models transparent and understandable. In this case study we will discuss the platform we developed for deep learning on medical images. Two example projects, “Cell detection in bone marrow” and “Analysis of colon tissue”, will be discussed to illustrate how UX affects end-users' perception of AI.