
Overcoming Deep Learning Stumbling Blocks

The sixth annual Deep Learning Summit in London saw industry leaders, academics, researchers, and innovative startups present the latest technological advancements and industry application methods in the field of deep learning.


At the sixth annual Deep Learning Summit in London, attendees congregated to hear from industry leaders, academics, researchers, and innovative startups presenting both the latest cross-industry technological advancements and industry application methods. Running in parallel were the AI Assistant Summit and the AI in Retail and Advertising Summit, bringing together some of the world's leading experts from universities, brands, and emerging startups.

The day began with Huma Lodhi, data scientist at BP, discussing some of the tips and tricks she has picked up during her work in deep learning, with intelligent methodologies using structured and unstructured data as the focal point. Huma said that these methodologies, while beneficial for improving customer service, increasing revenue, and preventing losses, often hit initial stumbling blocks in industry application:

“We need to find better methods to use this data for our real-world applications," she said. "Examples of this can be noisy data, missing data, or unstructured data. This gives us the principal problem for data: quantity vs. quality.”

Huma then went on to suggest, however, that there is no straightforward or quick fix for these issues because of the variation in industry requirements:

“It is important for us not to focus on specific algorithms for specific problems," she said. "These kinds of algorithms usually fail in real-world applications. We should try to take parts of different algorithms and combine them into a real world-applicable solution. This can be in the form of deep learning neural networks.”
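The quantity-vs-quality tension Huma describes can be made concrete with a minimal sketch: a raw sample containing missing readings and a noisy outlier is reduced to a smaller but cleaner set before modelling. The data, the median-based filter, and its threshold below are all illustrative assumptions, not anything presented at the talk.

```python
# Illustrative only: a large, messy sample (quantity) is pared down to a
# smaller, trustworthy one (quality) before it reaches a model.
raw = [12.1, None, 11.8, 250.0, 12.4, None, 11.9]  # None = missing, 250.0 = noise

# Drop missing values, then filter outliers far from the median.
present = sorted(x for x in raw if x is not None)
median = present[len(present) // 2]
clean = [x for x in present if abs(x - median) <= 0.5 * median]

print(clean)  # the noisy 250.0 reading is gone
```

Real pipelines would impute rather than drop values where data is scarce; the point is simply that raw volume alone is not usable signal.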

Experimentation with data was also a key topic covered in the first deep dive of the day, a more intimate style of session with regular question intervals. Hado van Hasselt, senior staff research scientist at DeepMind, opened his discussion with the statement that we are currently at an intersection of two fields, noting that the combination of deep learning and reinforcement learning has brought an air of excitement to DeepMind. Hado commented that processes at DeepMind are developing rapidly. He explained that DeepMind first automated certain solutions using the game Go; once agents had learned to do this, the efficiency and accuracy of their problem-solving increased.

Once automation became possible, DeepMind took another step forward with a focus on agents learning solutions from experience. That said, the initial data needed to be of the highest quality because of the sequential nature of learning. Hado stressed that, when models are built on data with mistakes, the intended outcome is not always achieved. Goal-optimized reward systems were given as an example of the need for correctly formulated algorithms from the outset. Hado suggested that DeepMind had to find a goal reward that was easy enough to specify and deliver, but that also gave good results. In one test, a car drove around a track with reward tied to quicker lap times. The agent soon discovered that reward came not from distance traveled but simply from wheel speed; the car ended up on its roof, wheels spinning, earning high rewards without supplying any usable data for DeepMind.
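The car-on-its-roof anecdote is a classic case of reward misspecification, and the failure mode can be sketched in a few lines. The reward functions, policies, and numbers below are hypothetical toy values, not DeepMind's setup: rewarding raw wheel speed can be maximized with zero lap progress, while rewarding actual advance along the track cannot.

```python
# Toy illustration of reward misspecification (hypothetical values).
def speed_reward(wheel_speed, progress_delta):
    """Mis-specified: pays for wheel speed regardless of movement."""
    return wheel_speed

def progress_reward(wheel_speed, progress_delta):
    """Better specified: pays only for actual advance along the track."""
    return progress_delta

# Two behaviours over a 10-step episode, as (wheel_speed, progress_delta) pairs:
# "spin"  = car on its roof, wheels at full speed, zero progress.
# "drive" = car moving normally, moderate wheel speed, steady progress.
episodes = {
    "spin":  [(10.0, 0.0)] * 10,
    "drive": [(5.0, 5.0)] * 10,
}

def total(reward_fn, steps):
    return sum(reward_fn(s, p) for s, p in steps)

for name, steps in episodes.items():
    print(name, total(speed_reward, steps), total(progress_reward, steps))
```

Under `speed_reward` the spinning car out-scores the driving one, so an optimizer converges on exactly the useless behaviour Hado described; under `progress_reward` the ordering flips.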

Sole reliance on either human behavior or model data was also rejected by Hado, who deems a mixture of the two a determining factor in the quality of results. AlphaGo's policy network was first trained on human play, eventually becoming far more skillful than the average player. However, DeepMind also found that relying on human data alone can produce problematic results, as the optimal gameplay is rarely what a human would choose.
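The two-stage recipe described above can be sketched as follows. This is a hypothetical toy, not DeepMind's code: a policy is first seeded by imitating human moves, then a self-play loop is allowed to replace any move that a (here deliberately trivial) evaluator scores higher, letting the agent depart from human decision-making.

```python
# Hypothetical sketch: imitate human play first, then improve from experience.
import random

def pretrain_on_human_games(policy, human_moves):
    """Stage 1: imitation -- copy the human's preferred move per state."""
    for state, move in human_moves:
        policy[state] = move
    return policy

def improve_by_self_play(policy, evaluate, candidates, rounds=100):
    """Stage 2: try candidate moves and keep any that score better,
    allowing the policy to depart from human play."""
    rng = random.Random(0)  # fixed seed for a reproducible sketch
    for _ in range(rounds):
        state = rng.choice(sorted(policy))
        candidate = rng.choice(candidates)
        if evaluate(state, candidate) > evaluate(state, policy[state]):
            policy[state] = candidate
    return policy

# Toy evaluator where a move's score is just its value; the human moves are
# decent but not optimal, so self-play improves on them.
score = lambda state, move: move
policy = pretrain_on_human_games({}, [("s1", 2), ("s2", 3)])
improved = improve_by_self_play(policy, score, candidates=[1, 2, 3, 4, 5])
print(improved)
```

The imitation stage gives the search a sensible starting point; the self-play stage is what lifts it past the average player, which matches the mixture Hado advocated.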

Read the full story here.