Deep Learning Experiments at RACAI

Abstract: Deep learning is a relatively new sub-domain of machine learning that emerged with the advent of affordable, fast computational resources. It encompasses massive, multi-layered neural networks that can have billions of parameters and have been shown to solve difficult problems. One recent example is Google DeepMind's AlphaGo, the first program to beat a professional Go player on a full-size board. Deep neural networks are nowadays used for machine translation, autonomous driving, and facial recognition and authentication, to name just a few of their applications.
At the Research Institute for Artificial Intelligence "Mihai Draganescu" of the Romanian Academy, deep learning experiments started a few years ago, tackling problems such as speech synthesis and recognition, natural language parsing, POS tagging and, very recently, named entity recognition. The present talk will touch on experiments involving parsing, POS tagging and named entity recognition, detailing the neural network architectures, their design choices, the experimentation cycles and the performance obtained in terms of widely used measures.