|09:25-10:15||Practical transfer learning for NLP with spaCy and
INES MONTANI, Founder of Explosion AI
|10:45-11:35||Deep understanding of text-based models
ANNA WRÓBLEWSKA, PhD
Warsaw University of Technology & Applica.ai
|11:40-12:30||How to learn trustworthy knowledge graphs? Seven problems
and seven remedies
AGNIESZKA ŁAWRYNOWICZ, PhD
Poznań University of Technology
|13:30-14:10||Simple is not easy - the overview of two
MARIA KNORPS, PhD
IF Research Polska
|14:10-15:00||Data analysis in neuroimaging
HANNA NOWICKA, PhD Candidate
|15:30-16:05||Robustness in machine vision: poking your deep learning
IRINA VIDAL MIGALLÓN
Senior Computer Vision & AI Engineer
Siemens Mobility GmbH
|16:10-16:45||Cognimates: a platform for AI coding and education
STEFANIA DRUGA [live stream]
Transfer learning has been called "NLP's ImageNet moment". Recent work has shown that models can be initialized with detailed, contextualised linguistic knowledge, drawn from huge samples of data. In this talk, I'll explain spaCy's new support for efficient and easy transfer learning.
A data scientist should not only code machine learning models but also understand their internal mechanisms. How do we deal with the problems of text modeling in business applications and in research and development projects? I will show several use cases, including the detection of emotions, abusive language and more. I will also show a tool for explaining suspicious results in data sets and in model outputs.
Even though the world of data science is fascinated by LSTMs and GANs, many commercial projects do not need such heavy artillery to provide value for the client. In my talk, I will present two projects whose main purpose was to create a tool for data analysis and visualization. The first is an application for constant audit and the second is a Twitter explorer. Both are data-based web apps, and both gave our customers major savings, with the IQR (interquartile range) being the most advanced statistical concept used. While building those projects we encountered architectural and technical challenges; I will present some of them, along with our road to a solution. Our tech stack was: Python-Flask, MariaDB, Elasticsearch, Vue, Bootstrap.
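The IQR mentioned in the abstract is simple enough to compute with the Python standard library alone; a minimal sketch (toy data, not from the talk):

```python
from statistics import quantiles

def iqr(values):
    """Interquartile range: the spread of the middle 50% of the data,
    i.e. the distance between the first and third quartiles."""
    q1, _, q3 = quantiles(values, n=4)  # the three quartile cut points
    return q3 - q1

print(iqr([1, 2, 3, 4, 5, 6, 7, 8]))  # 4.5
```

Because the IQR ignores the tails, it is a robust measure of spread for the kind of dashboard-style analytics the talk describes, unaffected by a handful of outliers.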
Neuroimaging is a rapidly advancing field which requires solving more and more complex data-analysis challenges. Magnetic resonance scans of the brain are an invaluable source of information for clinicians and researchers, but their correct analysis requires advanced statistical knowledge. In recent years, neuroscientists have gained access to much larger and more complex datasets, for which traditional methods of analysis are no longer adequate. I would like to show what kind of information we can get from neuroimages, what the current challenges in the field are, and what solutions the community has proposed.
Knowledge graphs are now routinely used for search, question answering, conversation, intelligent dashboards and more. In my talk, I will first define what knowledge graphs are, giving examples of notable community-driven and industrial knowledge graphs. I will then explore knowledge extraction for building knowledge graphs and their further development, which is prone to various problems such as incompleteness, bias, inconsistency and ambiguity, and I will discuss how to overcome those problems.
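A knowledge graph can be represented minimally as a set of (subject, predicate, object) triples, and one of the problems the abstract names, inconsistency, can then be checked mechanically. A toy sketch (hypothetical predicate and data, not from the talk):

```python
from collections import defaultdict

# Toy knowledge graph as (subject, predicate, object) triples.
triples = {
    ("Warsaw", "capital_of", "Poland"),
    ("Krakow", "capital_of", "Poland"),  # inconsistent: two capitals claimed
    ("Warsaw", "located_in", "Poland"),
}

def inconsistencies(triples, predicate):
    """For a predicate expected to have a unique subject per object
    (e.g. 'capital_of'), report objects claimed by several subjects."""
    by_object = defaultdict(set)
    for s, p, o in triples:
        if p == predicate:
            by_object[o].add(s)
    return {o: subjects for o, subjects in by_object.items() if len(subjects) > 1}

print(inconsistencies(triples, "capital_of"))
```

Real knowledge graphs encode such uniqueness constraints in an ontology (e.g. OWL functional properties) rather than ad hoc code, but the principle of validating extracted triples against schema expectations is the same.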
For several years now, industrial Computer Vision systems have been powered by Deep Learning, also in production. If we should poke our code until it breaks, why would deep learning models get a free pass? We'll see different ways to poke, improve and, above all, robustify a vision model before letting it run in production.
Current artificial intelligence (AI) research efforts are driven by a society where adults see the opportunities of this new technology through the lens of old paradigms of progress. What if people could harness AI applications in novel ways that go beyond problem-solving and specific challenges? We can best begin to explore the potential of AI by inviting children to explore the infinite array of opportunities intelligent technology provides.
Young people growing up with intelligent devices have an intrinsically different understanding of how this technology is embedded in our daily lives, and they are more open to imagining non-prescribed ways to interact with it and to learn from and with it. Children experience the world through play and make-believe. This sets them at an advantage to explore all possibilities of reading and making the world. In this talk I will show you examples of how children are learning with and from AI and explain why this is a crucial time to involve the next generation in the process of designing humanistic intelligent applications.