CMU-MLD & AIRC Joint Workshop on AI Co-evolving with Humans in the Real World
- 13:30 - 14:00
Hideki Asoh, Deputy Director, Artificial Intelligence Research Center (AIRC)
「Introduction of AIRC, AIST
- An open innovation hub for AI collaborating with humans in the real world」
AI, which has been developing very rapidly mainly based on machine learning
using the big data gathered through services on the internet,
is now being incorporated into various services in the real world and becoming
the most important technological infrastructure for the data-driven smart society.
AIRC was established in May 2015 to serve as an open innovation hub
for promoting large-scale AI research
in collaboration with researchers from Japan and around the world.
Since its establishment, AIRC has focused its research on
AI that collaborates with humans in the real world, where the explainability,
interpretability, and understandability of AI are very important.
In this talk, I will briefly introduce the current activities of AIRC
and discuss the future direction of AI research.
- 14:00 - 15:00
Professor Manuela Veloso, Carnegie Mellon University (Head of J.P. Morgan AI Research)
「Towards a Lasting Human-AI Interaction」
Artificial intelligence, including extensive data processing,
decision making and execution, and learning from experience,
poses new challenges for effective human-AI interaction.
This talk delves into multiple roles humans can have in such interaction,
as well as the underlying challenges to AI in particular
in terms of collaboration and interpretability.
The presentation is grounded in the context
of autonomous mobile service robots, with applications to other areas.
Manuela M. Veloso has recently joined J.P. Morgan Chase to create and head an Artificial Intelligence (AI) Research Center. Veloso is on leave from Carnegie Mellon University (CMU), where she is the Herbert A. Simon University Professor in the School of Computer Science and where she was Head of the Machine Learning Department until June 2018. Her research is in AI, Robotics, and Machine Learning. At CMU, she founded and directs the CORAL research laboratory for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn. Veloso and her students research a variety of autonomous robots, including mobile service robots and soccer robots. Veloso is a Fellow of AAAI, ACM, AAAS, and IEEE, an Einstein Chair Professor of the Chinese National Academy of Science, the co-founder and past President of RoboCup, and a past President of AAAI. See www.cs.cmu.edu/~mmv for further information, including publications.
【CORAL Group - Carnegie Mellon University】
Welcome to the CORAL research group, led by Professor Manuela Veloso. We research the scientific and engineering challenges of creating teams of intelligent agents in complex, dynamic, and uncertain environments, in particular adversarial environments such as robot soccer.
- 15:00 - 15:15 Break
- 15:15 - 15:45
Masashi Tsubaki, Researcher, Machine Learning Research Team
「Graph Neural Networks for Molecules：
Interpretable Applications for Biological and Material Data」
Graph neural networks (GNNs) for molecules have the potential to be applied
to bioinformatics, chemoinformatics, and material informatics.
For example, in bioinformatics, the prediction of compound-protein interactions
plays an important role in the virtual screening for drug discovery.
As another example, in material informatics, the discovery of molecules
with specific properties is crucial to developing effective materials.
In this presentation, we introduce our recently proposed GNN models
for these problems; in particular, models that incorporate
aspects of biological and material domain knowledge.
We believe that this leads to interpretable applications
in bioinformatics and material informatics.
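The core GNN idea the abstract builds on can be sketched in a few lines: atom feature vectors are repeatedly updated from their bonded neighbors, then pooled into a single molecule-level vector. This is a minimal generic message-passing sketch in numpy, not the authors' proposed models; the graph, weights, and dimensions are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-atom molecular graph: adjacency matrix A and one-hot
# atom-type features H (everything here is illustrative).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1, 0],   # atom type 0
              [0, 1],   # atom type 1
              [0, 1]], dtype=float)

d_in, d_hid = H.shape[1], 4
W1 = rng.normal(scale=0.5, size=(d_in, d_hid))
W2 = rng.normal(scale=0.5, size=(d_hid, d_hid))

def relu(x):
    return np.maximum(x, 0.0)

# Two rounds of message passing: each atom's vector is updated
# from its own and its neighbors' vectors (self-loops added).
A_hat = A + np.eye(len(A))
H = relu(A_hat @ H @ W1)
H = relu(A_hat @ H @ W2)

# Readout: sum-pool atom vectors into one molecule-level vector,
# which a final layer could map to a property prediction.
mol_vec = H.sum(axis=0)
print(mol_vec.shape)  # (4,)
```

The sum-pooling readout makes the prediction a sum of per-atom contributions, which is one common route to the kind of interpretability the abstract mentions.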
- 15:45 - 16:45
Associate Professor Pradeep Ravikumar, Carnegie Mellon University
「Explainable Artificial Intelligence via Representer Points」
As machine learning systems start to be more widely used,
we are starting to care not just about the accuracy and speed
of predictions, but also why the ML system made its specific predictions.
In the case of state-of-the-art machine learning models, however,
even machine learning experts do not have a clear understanding
of why, say, a deep neural network makes a particular prediction.
We propose to explain the predictions of a specific class of ML models,
namely deep neural networks, by pointing to the set of what we call
representer points in the training set, for a given test point prediction.
Specifically, we show that we can decompose the preactivation prediction
of a neural network into a linear combination of activations of training points,
with the weights corresponding to what we call representer values,
which thus capture the influence of each training point
on the learned parameters of the network.
Our method is scalable enough to allow for real-time explanations and feedback.
(Joint work with Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen)
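The decomposition described above can be illustrated in its simplest setting: an L2-regularized linear "last layer", where the stationarity condition at the optimum yields the representer values in closed form. The numpy sketch below is a toy illustration under that assumption, not the paper's implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X plays the role of last-layer activations,
# y in {-1, +1} are binary labels (all illustrative).
n, d = 100, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

lam = 0.1  # L2 regularization strength

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by gradient descent on
#   (1/n) * sum_i log(1 + exp(-y_i * phi_i)) + lam * ||w||^2,
# where phi_i = <w, x_i> is the pre-activation prediction.
w = np.zeros(d)
for _ in range(5000):
    phi = X @ w
    dL_dphi = -y * sigmoid(-y * phi)   # per-example dL/dphi
    w -= 0.5 * (X.T @ dL_dphi / n + 2 * lam * w)

# At the optimum, w = sum_i alpha_i * x_i with representer values
#   alpha_i = -1/(2*lam*n) * dL/dphi_i,
# so any test prediction decomposes over the training points.
phi = X @ w
alpha = (y * sigmoid(-y * phi)) / (2 * lam * n)

x_test = rng.normal(size=d)
phi_direct = float(x_test @ w)                        # direct prediction
phi_decomposed = float(np.sum(alpha * (X @ x_test)))  # representer sum
print(phi_direct, phi_decomposed)  # nearly identical at the optimum
```

Sorting the per-point terms `alpha * (X @ x_test)` by magnitude surfaces the training points most responsible for a given test prediction, which is the kind of explanation the talk describes.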
Associate Professor Pradeep Ravikumar's Home Page
- 16:45 - 17:00
Hideki Asoh, Deputy Director, Artificial Intelligence Research Center, AIST