Deep Learning World Berlin 2018
November 12, 2018 – Estrel Hotel Berlin
A cognitive sensor is supposed to use capabilities such as hearing, seeing or feeling, just as humans do. In this scenario we used audio classification to detect machine defects: a standard microphone served as the sensor, and we trained a model to recognize the defects. We adapted the image classification pipeline to audio processing, and the results were quite impressive.
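The abstract does not spell out the preprocessing, but a common way to adapt image classification to audio is to turn the waveform into a spectrogram "image" first. A minimal numpy sketch of that step, using a synthetic tone plus noise in place of real microphone recordings (the framing parameters are illustrative assumptions, not the speakers' actual settings):

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame the signal, window it, and take log-magnitude FFTs.

    Stacking the frames yields a 2-D time-frequency 'image' that a
    standard image-classification CNN can consume.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # (time frames, freq bins)
    return np.log1p(mag)

# Example: one second of synthetic 'machine' audio at 8 kHz --
# a 440 Hz hum plus noise standing in for a real recording.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(8000)
spec = log_spectrogram(audio)
print(spec.shape)  # one row per time frame, one column per frequency bin
```

From here, defective and healthy recordings become labeled 2-D arrays, and the rest of the pipeline looks like ordinary image classification.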
This talk will provide insights into the approach, how we trained the DNN, and the results.
In 2015, CNNs first won against humans in the yearly ImageNet challenge on image classification. This marked a turning point in object recognition within visual data. Since then, further major advances have been made on both the algorithmic and the hardware side. It has never been easier to set up and run a CNN. dida Datenschmiede has made use of these technological advancements in various projects. The presentation will provide you with insights into how object recognition projects are planned, which services are used to (technically) set up a project, what the major obstacles and solutions are, and what the practical applications for clients look like.
Alongside (un)supervised learning, reinforcement learning is a major machine learning technology with huge potential in a broad range of applications such as robotics, autonomous driving, gaming and general control. This talk describes its major concepts, algorithms and software environments and gives a detailed overview of its capabilities. It addresses people with a reasonable background in other AI/ML technologies and therefore requires a good technical background.
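As a concrete taste of the concepts the talk covers, here is a minimal tabular Q-learning sketch on a toy corridor environment. The environment and algorithm choice are illustrative assumptions on our part, not the speaker's material:

```python
import numpy as np

# Tabular Q-learning on a toy 1-D corridor: states 0..4, goal at 4.
# Actions: 0 = left, 1 = right. Reward 1 on reaching the goal, else 0.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration
rng = np.random.default_rng(1)

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else 0.0
        # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy_policy = np.argmax(Q, axis=1)  # learned: go right toward the goal
print(greedy_policy)
```

The same value-iteration idea, scaled up with neural function approximators, underlies deep RL methods used in the application areas the talk mentions.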
State space models are among the best mathematical representations of dynamical systems. For the approximation of such systems with recurrent neural networks we have a universal approximation theorem, similar to the one for feedforward networks. To identify a model in this framework we can rely on data AND additional a priori insights into the class of dynamical systems. After some comments on the learning of recurrent networks we will study open (small) systems. By construction an open system is small, because there exists an environment which influences the system. In contrast, we will study closed (large) dynamical systems. In principle these are world models, because there are no influences from an outside world. Another class are dynamical systems on manifolds. They allow the description of high-dimensional systems – if the real dynamics stays on a low-dimensional manifold of the description space. If we apply these insights to forecasting, we also have to address uncertainty. We will end with a discussion of the differences between causality, determinism and uncertainty. The talk shows the relevance of these theories in real-world examples such as demand, load and commodity price forecasting. For a more detailed treatment of the above topics, see the full-day workshop on Nov 15.
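To make the open-vs-closed distinction concrete, here is a small numpy sketch of a recurrent network in state space form (the dimensions and the tanh transition are illustrative assumptions; the talk's actual architectures may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# A recurrent network as a state space model:
#   state transition: s_{t+1} = tanh(A s_t + B u_t)
#   observation:      y_t     = C s_t
# The input u_t is the environment acting on the open system; dropping
# the B u_t term yields a closed system that evolves autonomously.
state_dim, input_dim, output_dim = 4, 2, 1
A = 0.5 * rng.standard_normal((state_dim, state_dim))
B = rng.standard_normal((state_dim, input_dim))
C = rng.standard_normal((output_dim, state_dim))

def rollout(u_sequence, s0=None):
    """Unfold the open system over an input sequence, collecting outputs."""
    s = np.zeros(state_dim) if s0 is None else s0
    ys = []
    for u in u_sequence:
        s = np.tanh(A @ s + B @ u)
        ys.append(C @ s)
    return np.array(ys)

u_seq = rng.standard_normal((10, input_dim))
y_seq = rollout(u_seq)
print(y_seq.shape)  # (time steps, output dimension)
```

Training such a model means fitting A, B, C to observed sequences, which is where the data and the a priori structural knowledge meet.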
Transfer learning is a deep learning technique that uses pre-trained networks as starting points for training domain-specific classifiers. This allows virtually out-of-the-box building of powerful baseline deep learning models for almost any domain, from medical images such as X-rays to industrial optical images or satellite imagery. The approach can be further generalized to non-image data sets such as IoT signals by treating them as multichannel 1-D images. We use GPU-enabled Deep Learning Virtual Machines available on the Microsoft Azure AI platform to show how engineers can leverage open-source deep learning frameworks like Keras to build end-to-end intelligent signal classification solutions.
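The talk itself builds on Keras and Azure; as a self-contained illustration of the core idea – freeze the pre-trained part, train only a small head – here is a library-free numpy sketch in which a fixed random projection stands in for a frozen pre-trained feature extractor:

```python
import numpy as np

rng = np.random.default_rng(42)

# Transfer learning in miniature: a frozen "pre-trained" feature extractor
# (a fixed random projection standing in for, e.g., a Keras convolutional
# base with frozen weights) plus a small trainable classifier head.
W_frozen = rng.standard_normal((64, 16))      # pretend pre-trained weights

def features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return np.maximum(0.0, x @ W_frozen)      # ReLU features

# Tiny synthetic two-class "target domain" data set.
x0 = rng.standard_normal((50, 64)) - 0.5
x1 = rng.standard_normal((50, 64)) + 0.5
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Train only the head: logistic regression on the frozen features.
F = features(X)
w, b = np.zeros(16), 0.0
for _ in range(200):
    z = np.clip(F @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))              # sigmoid
    grad_w = F.T @ (p - y) / len(y)           # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean(((F @ w + b) > 0) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

With a real pre-trained network the frozen part encodes knowledge from a large source data set, which is why a small head trained on little target data can already give a strong baseline.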
Machine learning thrives on large, well-organized and labeled training data sets. Big Data – large data sets collected in the real world – is often neither. Such data sets require unsupervised learning approaches that help us discover the inherent structure in the data and visualize it. I'll discuss a statistical learning approach based on mixture models and naive Bayes classifiers to find clusters in binary feature vectors. By arranging the classifiers topologically one can impose a spatial structure and visualize large data sets in a way similar to self-organizing maps. Such maps can help us understand the messy real-world data appearing in many Big Data analyses.
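The core clustering step can be sketched compactly: EM for a Bernoulli mixture model over binary feature vectors. This is a generic illustration of mixture-model clustering on synthetic data, not the speaker's implementation (the topological arrangement of the components is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic clusters of binary vectors: bits mostly on in the
# first half of the positions vs mostly on in the second half.
n, d = 200, 10
X = np.vstack([
    rng.random((n // 2, d)) < np.r_[[0.9] * 5, [0.1] * 5],
    rng.random((n // 2, d)) < np.r_[[0.1] * 5, [0.9] * 5],
]).astype(float)

K = 2
mu = rng.uniform(0.3, 0.7, size=(K, d))   # per-cluster bit probabilities
pi = np.full(K, 1.0 / K)                  # mixing weights

for _ in range(50):
    # E-step: responsibilities r[i,k] from the naive-Bayes likelihood
    #   pi_k * prod_j mu_kj^x_ij * (1 - mu_kj)^(1 - x_ij)
    log_p = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments
    Nk = r.sum(axis=0)
    mu = np.clip((r.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)
    pi = Nk / n

labels = np.argmax(r, axis=1)
print(labels[:5], labels[-5:])  # the two synthetic clusters separate
```

Each mixture component is exactly a naive Bayes classifier over the binary features; laying the components out on a grid is what adds the spatial, map-like visualization.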
The renowned Berlin painter Roman Lipski has been working for two years with his Artificial Muse A.I.R., which inspires and augments him in his artistic work and pushes him to new frontiers. We now present the latest generation of the muse, which is based on generative networks and allows for an intuitive and fluid interaction between artist and algorithm. Using Conditional Generative Adversarial Networks (cGANs for short) at its core, A.I.R. translates sketches directly into new inspirations, facets and images. While the algorithm itself is mathematically complex and not easily accessible to human understanding, Lipski’s new approach to the muse exemplifies how curious discovery and experimentation can lead to intuitive understanding and ultimately trust – in a new generation of tools, and in artificial intelligence per se. Explainability by interaction, trust by time. In our talk we will take a deep dive into the technical layer and share what we learned at “the in-betweens” of Roman and his Artificial Muse, of human and artificial intelligence. And this is just the beginning...