Eventi e seminari

SEMINAR ANNOUNCEMENT

 

Auditing Deep Learning processes through Kernel-based Explanatory Models

 

November 7th, 2019, 13:00-14:00

Aula Archimede (Macroarea di Ingegneria)

 

 SPEAKER:

Danilo Croce

(Università degli Studi di Roma “Tor Vergata”)

 

Abstract. As NLP systems become more pervasive, their accountability gains value as a focal point of effort. The epistemological opaqueness of nonlinear learning methods, such as deep learning models, can be a major drawback for their adoption. In this talk, we discuss the application of Layerwise Relevance Propagation over a linguistically motivated neural architecture, the Kernel-based Deep Architecture (KDA), in order to trace back connections between linguistic properties of input instances and system decisions. Such connections then guide the construction of argumentations about the network’s inferences, i.e., explanations based on real examples, semantically related to the input. We propose a methodology to evaluate the transparency and coherence of analogy-based explanations, modeling an audit stage for the system. Quantitative analysis on two semantic tasks, i.e., question classification and semantic role labeling, shows that the explanatory capabilities (native in KDAs) are effective and pave the way to more complex argumentation methods.
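A minimal sketch of the mechanism at play, assuming a single dense layer and the ε-rule of Layerwise Relevance Propagation (illustrative NumPy code, not the KDA implementation):

```python
import numpy as np

def lrp_dense(a, w, b, r_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the relevance
    assigned to the outputs back onto the inputs, proportionally to each
    input's contribution z_ij = a_i * w_ij to the pre-activation z_j."""
    z = a[:, None] * w                    # contribution of input i to output j
    denom = z.sum(axis=0) + b             # pre-activations z_j
    denom = denom + eps * np.sign(denom)  # epsilon stabiliser, avoids dividing by ~0
    return (z * (r_out / denom)[None, :]).sum(axis=1)

# Toy layer: 3 inputs, 2 outputs; relevance starts at the predicted class.
rng = np.random.default_rng(0)
a = rng.normal(size=3)                    # input activations
w = rng.normal(size=(3, 2))               # weights
b = rng.normal(size=2)                    # biases
r_in = lrp_dense(a, w, b, np.array([1.0, 0.0]))
print(r_in, r_in.sum())                   # relevance is (approximately) conserved
```

In a KDA, the input dimensions of such layers encode kernel similarities to selected training examples, so relevance propagated back to them naturally points at the real examples that justify the decision.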

 

_ _ _

 

SEMINAR ANNOUNCEMENT

 

Hey, Merry Men! Robin-Hood Artificial Intelligence is Calling You!

 

Wednesday, September 4, 14:00

Aula Archimede (Macroarea di Ingegneria)

 

 SPEAKER:

Fabio Massimo Zanzotto

(Università degli Studi di Roma “Tor Vergata”)

 

Abstract. Artificial Intelligence may accelerate the fourth Industrial Revolution by exploiting Human Knowledge stored in Personal Data. Will the job market and your future job survive this fourth Industrial Revolution? All of us are invited to think about it.

Short cv. Fabio Massimo Zanzotto is Associate Professor at the Department of Enterprise Engineering of the University of Rome Tor Vergata. He has been working for more than 20 years in the Artificial Intelligence (AI) field and is the author of several publications in the areas of Natural Language Processing (NLP) and Machine Learning applied to NLP and to Medicine. Recently, he has realized how disruptive the impact of AI research can be. Consequently, he has redesigned his research agenda to contribute to a fairer AI.

_ _ _

 

SEMINAR ANNOUNCEMENT

 

Semantic Parsing and Beyond to Create a Commonsense Knowledge Base

 

Wednesday, April 11, 14:00

Aula Archimede (Macroarea di Ingegneria)

 

 SPEAKER:

Valerio Basile

(Università degli Studi di Torino)

Abstract. Today’s Web represents a huge repository of human knowledge, not only about facts, people, places and so on (encyclopedic knowledge), but also about everyday beliefs that average human beings are expected to hold (commonsense knowledge). Automated agents such as domestic robots and virtual assistants need to be equipped with this kind of knowledge in order to be autonomous in their functions. However, the majority of the commonsense knowledge on the Web is present in the form of natural language, rather than in structured formats ready to be processed by machines. Semantic Parsing and Word Sense Disambiguation are two well-studied NLP tasks that aim at extracting structure and lexical semantics from natural language, respectively. During my appointment at Inria Sophia Antipolis on the EU project ALOOF (Autonomous Learning of the Meaning of Objects [1]), I worked on combining the two tasks in order to “read” a large quantity of text on the Web and collect many instances of structured, grounded knowledge, under the common framework of Frame Semantics. After creating a corpus and parsing it with KNEWS (Knowledge Extraction With Semantics [2]), the pipeline we developed, we use clustering techniques to filter out the noise and distill the most prototypical knowledge about common concepts, particularly objects, locations and actions. The final result is a language-neutral Linked Data dataset, a subset of the commonsense knowledge base DeKO (Default Knowledge about Objects [3]).

[1] https://project.inria.fr/aloof/
[2] https://github.com/valeriobasile/learningbyreading
[3] http://deko.inria.fr/
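As a toy illustration of the distillation step described in the abstract above, the sketch below uses simple frequency thresholding over invented frame instances (a stand-in for the clustering techniques actually used; all names and data are hypothetical):

```python
from collections import Counter

# Hypothetical (frame, object, filler) instances, as a machine-reading
# pipeline might extract them from parsed Web text.
extractions = [
    ("Being_located", "Mug", "Kitchen"),
    ("Being_located", "Mug", "Kitchen"),
    ("Being_located", "Mug", "Office"),
    ("Being_located", "Mug", "Moon"),     # extraction noise
    ("Using", "Mug", "Drinking"),
    ("Using", "Mug", "Drinking"),
]

def distill(extractions, min_support=2):
    """Keep triples seen at least `min_support` times: frequent triples
    approximate prototypical commonsense knowledge, rare ones are noise."""
    counts = Counter(extractions)
    return [t for t, n in counts.items() if n >= min_support]

for triple in distill(extractions):
    print(triple)   # ('Being_located', 'Mug', 'Kitchen'), ('Using', 'Mug', 'Drinking')
```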

Short bio. Valerio Basile is a postdoctoral researcher at the University of Turin. During his PhD at the University of Groningen he worked on formal representations of meaning, including the construction of the Groningen Meaning Bank, a semantically annotated corpus of English; on linguistic annotation and gamification, including the development of Wordrobe, an online Game With A Purpose for semantic annotation; and on natural language generation, particularly starting from logical formulas as abstract meaning representations. During a two-year postdoc at Inria Sophia Antipolis, France, he worked on commonsense knowledge base building, using machine reading and frame semantics, in the context of the European project ALOOF (Autonomous Learning of the Meaning of Objects). In parallel, his research interests included multilingual semantic parsing for the ERC project MOUSSE, and sentiment analysis, with the organization of the SENTIPOLC, ABSITA and SemEval Emoji classification shared tasks.

_ _ _

 

SEMINAR ANNOUNCEMENT

 

Annotator behaviour mining for natural language processing

   

Tuesday, March 7, 2017, 11:30

Aula Archimede (Macroarea di Ingegneria)

       

SPEAKER:

Tokunaga Takenobu

School of Computing, Tokyo Institute of Technology

  ABSTRACT:

The last two decades have witnessed the great success of revived empiricism in natural language processing (NLP) research. Namely, the corpus construction and machine learning (CC-ML) approach has been the mainstream of NLP research, where corpora are annotated for a specific task and then used as training data for machine learning (ML) techniques to build a system for the task. From the viewpoint of annotation, this approach utilises only the results of annotation, i.e. annotated corpora. In this talk we will introduce our recent attempts that aim at utilising the information obtained during the annotation process as well as the annotation results. To be more concrete, we collect data on annotator behaviour during corpus annotation and utilise it to improve NLP systems. We start with an overview of our project, followed by two studies in which annotators’ eye-tracking data is utilised for the named entity recognition task and for predicate argument structure analysis.
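A minimal sketch of the underlying idea, assuming hypothetical per-token gaze features and an off-the-shelf scikit-learn classifier (not the models presented in the talk):

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical per-token features: [is_capitalised, word_length,
# fixation_count, total_fixation_ms]; the last two come from eye-tracking
# logs recorded while annotators labelled the corpus.
X = [
    [1, 5, 4, 620],   # "Tokyo": annotators dwelt on it, likely an entity
    [0, 3, 1, 110],   # "the"
    [1, 8, 3, 540],   # "Takenobu"
    [0, 4, 1,  90],   # "gave"
]
y = [1, 0, 1, 0]      # 1 = token belongs to a named entity

# Gaze behaviour serves as extra evidence alongside surface features:
# tokens that annotators fixated longer tend to be the harder, entity-like ones.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1, 6, 4, 700]]))      # unseen capitalised, long-gaze token
```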

The seminar slides can be downloaded from TokunagaTakenobu@TorVergata.

Short cv

Takenobu Tokunaga is a professor at the School of Computing, Tokyo Institute of Technology. He received his Ph.D. from Tokyo Institute of Technology in 1991. His current interests include natural language processing, in particular building and managing language resources, applications of language technologies to intelligent information access and education, and dialogue systems.

_ _ _

 

SEMINAR ANNOUNCEMENT

 

Machine Reading: Central Goal(s) and Promising (?) Approaches

 

Monday, December 14, 2015, 11:00

Aula Archimede (Macroarea di Ingegneria)

 

SPEAKER:

 

David Israel

Artificial Intelligence Center at SRI

ABSTRACT: In 2009, the Defense Advanced Research Projects Agency (DARPA) initiated what was intended to be a 5-year project (though in the end, it lasted only 3) aimed at exploring the principles behind the design and implementation (at least in prototype form) of systems that could take more-or-less arbitrary “factually informative” English text and understand it. I had the honor and privilege of being the Principal Investigator of the SRI-led team, one of three large teams in the Program. That privilege meant I didn’t really have to do any actual work, beyond (i) being ultimately responsible for the progress of the team, for reporting said progress to DARPA, for tracking budgets, etc., etc.; and (ii) that honor meant I was free to think large-ish thoughts about how one should conceive of the goals of such a project and how to put our team’s approach, focused on large-scale joint inference, into a wider research context. I promise I will not talk about (i) in this seminar; so if you’ve read and understood this abstract, you should know what I will be talking about.

Short bio: Dr. Israel recently retired from the Artificial Intelligence Center at SRI, with the title of Principal Scientist (Emeritus). Prior to his retirement, he worked in a number of areas in AI, including Knowledge Representation and Reasoning, Theory of (Rational) Action, and various parts of Natural Language Processing, including Formal Semantics and the Theory and Design of Machine Reading systems.

The presentation slides can be downloaded from this link.

  _ _ _

  

SEMINAR ANNOUNCEMENT

  

Kernel Methods for Structured Learning

in Statistical Language Processing

 

Thursday, October 22, 15:00

Aula Archimede

SPEAKER: 

Danilo Croce

Artificial Intelligence Research Group, University of Rome Tor Vergata

Abstract. In recent years, machine learning (ML) has been used more and more to acquire effective models for solving complex tasks in different disciplines, ranging from Machine Vision to Information Retrieval (IR) and Natural Language Processing (NLP). Within this scenario, Kernel Methods provide a powerful paradigm for the automatic induction of models by characterizing similarity functions between data examples, whether represented in continuous domains or over discrete structures, such as graphs and tree collections. Kernels are appealing as their application allows adopting a Structured Learning paradigm in which ML algorithms are applied directly to complex structures representing linguistic information, without the need for complex feature engineering. In this talk, the adoption of Kernel Methods within state-of-the-art ML algorithms for Statistical Language Processing will be introduced, in order to show how discrete but complex structures can be used directly within learning processes. Several linguistic tasks benefiting from the application of Kernel Methods will be discussed, such as Question Answering, Sentiment Analysis and Spoken Language Understanding in the context of Human-Robot Interaction. Finally, some challenges for future research will be introduced, e.g. the scalability of kernel-based methods when applied in “Big Data” scenarios.

Short bio: Danilo Croce received his Ph.D. in Informatics Engineering from the University of Rome Tor Vergata in 2012. Currently, he is an Assistant Professor at the Department of Enterprise Engineering and a member of the SAG@ART group at the same university. His expertise concerns theoretical and applied Machine Learning in the areas of Natural Language Processing, Information Retrieval and Data Mining. In particular, he is interested in innovative kernels within support vector machines and other kernel-based machines for advanced syntactic/semantic processing. He is the author of more than 50 scientific publications and has received a few best paper awards at international conferences. For more information, see http://sag.art.uniroma2.it/people/croce/.
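As a minimal illustration of the structured-learning paradigm described in the abstract above, the sketch below plugs a toy word-overlap kernel over raw sentences (a stand-in for the tree and graph kernels discussed in the talk) into scikit-learn’s precomputed-kernel SVM:

```python
import numpy as np
from sklearn.svm import SVC

def overlap_kernel(a, b):
    """Toy structural kernel: similarity = number of shared words.
    Real systems would use tree or graph kernels over parse structures."""
    return float(len(set(a.split()) & set(b.split())))

def gram(X, Y):
    """Gram matrix of pairwise kernel values between two sets of examples."""
    return np.array([[overlap_kernel(x, y) for y in Y] for x in X])

# Tiny question-classification task: LOCATION (0) vs PERSON (1) questions.
train = ["where is rome", "where is the station",
         "who is the president", "who wrote this book"]
labels = [0, 0, 1, 1]

clf = SVC(kernel="precomputed").fit(gram(train, train), labels)
print(clf.predict(gram(["where is the museum"], train)))  # -> [0], LOCATION
```

No feature vectors are ever engineered by hand: the learner only sees similarities between structured objects, which is exactly what lets kernel machines operate directly over parse trees and graphs.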

_ _ _

 

The Semiotic Perspective in Conceptual Modeling

 

Thursday, May 21, 2015, 11:00

Aula Archimede (Macroarea di Ingegneria)

 SPEAKER:

 Guido Vetere

 IBM Italia, Center for Advanced Studies

ABSTRACT: The conceptual models with which we represent knowledge in information systems are generally grounded in the apparatus of predicate logic, in particular in models based on computable fragments of first-order predicate logic, called ontologies (‘discourses on what exists’). As in logic, the nature of the predicates, and of the interpretation relation that links them to their objects, is not specifically investigated. Yet, just as semiotics had already identified different types of signs in the nineteenth century, one can easily discern, in modern ontologies, different types of predicates and interpretations. Although the difference between the lexical concepts of ordinary language, the concepts formalized in the theories of the natural sciences, those referring to regulations, metaphysical categories, and so on, is evident, the formalisms and methodologies underlying information technologies neither support nor encourage a conscious treatment of these specificities, even though it is relevant for many applications. Recovering these differences implies looking at the concepts of (computational) ontologies not as predicate symbols awaiting some arbitrary interpretation, but as semiotic objects taking part in specific processes of communication.

Short cv

Guido Vetere is a research director at IBM Italia and an associate of the Institute of Cognitive Sciences and Technologies of the CNR. He has been at IBM since 1988, where he has carried out research and development in various areas of Artificial Intelligence and coordinated IBM’s participation in European research projects. Since 2005 he has been director of the IBM Italia Center for Advanced Studies, with offices in Rome and Trento. In 2012 he served as international coordinator of the IBM Centers for Advanced Studies. He is the author of numerous publications in the fields of semantic technologies and computational linguistics. He collaborates with Il Sole 24 Ore (Nòva) on the popularization of topics related to the information society and artificial intelligence. He is President of the Senso Comune association (www.sensocomune.it), devoted to building an open knowledge base of the Italian language. His current interests focus mainly on the relationship between lexicon and ontology, and on knowledge representation and its access through natural language.

_ _ _

Wednesday, January 21, 2015, 10:30

Aula Convegni (Macroarea di Ingegneria)

 

SPEAKER:

Luigia Carlucci Aiello

 DIAG, Università La Sapienza, Roma

Luigia Carlucci Aiello, full professor of Artificial Intelligence, has been at Sapienza University of Rome since 1982, at the Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG). She chairs the degree programme council (CCL) and coordinates the PhD programme in Computer Engineering; from 2006 to 2010 she was director of the Department of Computer and Systems Science (DIS), now DIAG; from 2010 to 2013 she chaired the Faculty of Information Engineering, Informatics and Statistics. She founded the Artificial Intelligence research group at DIAG, and conducts research in artificial intelligence (knowledge representation and automated reasoning) and its applications; more recently she has been working on cognitive robotics. She is the founder, first president and now honorary member of the Italian Association for Artificial Intelligence, a Fellow of AAAI and of ECCAI, and chair of the IJCAI Board of Trustees. In 2002 she received an honorary doctorate from Linköping University (Sweden). In 2009 she received the “Donald Walker Distinguished Service Award: for her substantial contributions and extensive service to the field of Artificial Intelligence throughout her career.” In 2014 she received the analogous award from ECCAI.

Artificial Intelligence is dead;

or rather, it is more alive than ever

ABSTRACT: Artificial Intelligence has always been at the center of great debates on its feasibility and its “state of health”. In this presentation I will try to summarize those great debates and highlight some very significant recent applied results. These results ignite enthusiasm, but at the same time rekindle concerns about economic and social impacts and about destructive potential. Finally, we will reflect on the research trends ahead. The presentation slides can be downloaded from this link.

_ _ _

Thursday, October 23, 2014, 15:00

 Aula Paroli

Department of Enterprise Engineering

 

 SPEAKER

 Giuseppe Longo

Directeur de Recherche (DRE), CNRS, Centre Interdisciplinaire Cavaillès (République des Savoirs, Collège de France and École Normale Supérieure, Paris)

The Discrete-State Machine: Scientific Consequences of the “Digital Metaphor”

Abstract. Information theory, in its branching into the algorithmic theory of “elaboration” (Turing, Kolmogorov, Chaitin, ...) and the theory of “data transmission” (Shannon, Brillouin), is an extremely rich framework based on the discreteness (digital, numerical) of data types. The two theories, when examined from the point of view of physical causality, are Laplacian, that is, determination implies predictability (the theories are made to follow instructions exactly and correctly, i.e. to “iterate identically” all computations and data transmissions, and they work!). Their use, on the basis of the common-sense notions of “information” and “program”, has marked biological theorizing, under the hegemony of molecular biology. After recalling some elements of their origin in the debate on the foundations of mathematics, we will see how these machines, extraordinary new tools for human interaction, have diverted the understanding of the “living” by projecting onto the concept of organism, without saying so, a physics more than 100 years old, as well as a view of biological variability and biodiversity reduced to the information-theoretic concept of “noise”. Fortunately, we are moving out of the myth of the completeness of molecular, or genetic, information as a digital encoding of the organism.

_ _ _


 

Wednesday, September 24, 2014, 12:00

 Aula Paroli

Department of Enterprise Engineering

__

 SPEAKER

 Franco Cutugno, Antonio Origlia

Università di Napoli Federico II


Syllable technology: theory and applications

 Abstract

Speech is multi-layered. In the same signal, using different codings, a wide range of different messages is conveyed. From segments to intonation, the strict “textual” meaning, paralinguistic information, emotions, and data about the speaker and her attitudes are concurrently available in the speech signal. The listener then applies a number of strategies to “read” all these levels in parallel. The brain is used massively, and many of its areas are involved in the decoding process. It is widely accepted that, in this scenario, syllables are central. Their temporal properties are stable and coherent; the speech chain finds in syllables a natural unit for chunking, and the resulting units, even if not directly associated with lexical meaning, appear to be fundamental in decoding most of the information layers encapsulated in the signal.

We will give some evidence for this complex architectural interpretation, and we will illustrate a language-independent automatic syllable segmentation algorithm. The talk will continue by showing how syllables are used in automatic speech analysis systems: from speech recognition and prosodic interpretation up to emotion recognition.
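A bare-bones sketch of the idea behind energy-based syllable detection (assumed parameters and a deliberately simple heuristic, not the speakers’ actual algorithm):

```python
import numpy as np

def syllable_nuclei(signal, sr, frame_ms=10, smooth_frames=5):
    """Language-independent heuristic: syllable nuclei appear as peaks
    of the smoothed short-time energy envelope of the speech signal."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    # Moving-average smoothing removes spurious intra-syllable peaks.
    env = np.convolve(energy, np.ones(smooth_frames) / smooth_frames, mode="same")
    # Local maxima above a relative threshold are taken as nuclei.
    thr = 0.3 * env.max()
    peaks = [i for i in range(1, n - 1)
             if env[i] > env[i - 1] and env[i] >= env[i + 1] and env[i] > thr]
    return [i * frame / sr for i in peaks]  # nucleus times in seconds

# Synthetic "two-syllable" signal: two tone bursts separated by silence.
sr = 16000
t = np.linspace(0, 0.1, int(0.1 * sr), endpoint=False)
burst = np.sin(2 * np.pi * 200 * t)
signal = np.concatenate([burst, np.zeros(int(0.1 * sr)), burst])
print(syllable_nuclei(signal, sr))          # two detections, one per burst
```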

_ _ _

Thursday, May 8, 2014, 11:00

Aula Convegni (Macroarea di Ingegneria)

__

 SPEAKER

 Oded Cohn

 Director of IBM Research – Haifa

__

Cognitive Systems

ABSTRACT

The talk will describe the progression of IBM Watson’s technology since its public introduction in 2011 on Jeopardy!, which was recognized as a milestone in the history of computer science. Watson represents a whole new class of industry-specific solutions called cognitive systems. These new computing systems are essential to helping us access and gain insight from the huge volumes of information being created today. Rather than being programmed to anticipate every possible answer or action needed to perform a function or set of tasks, cognitive computing systems are trained using artificial intelligence (AI) and machine learning algorithms to sense, predict, discover, infer and, in some ways, think, all in an effort to help us deal with today’s complex decisions. Cognitive systems augment human intellect, boosting the productivity and creativity of individuals, teams and researchers. These systems are capable of transforming industries and solution areas. The talk will touch on some of the cognitive work being done at IBM Research around the world, and take a closer look at solutions from the IBM Research lab in Haifa.