AIST Artificial Intelligence Research Center International Symposium 2023 (FY2022)
1. Purpose
Under the keyword "co-evolving AI," the Artificial Intelligence Research Center aims to realize a society in which humans and AI collaborate on problem solving and, through that process, improve together. Under the companion keyword "AI embedded in the real world," we have also worked to bring AI technologies, developed thus far largely in the Internet space, into real-world services across a wide range of fields, including manufacturing, medicine, nursing care, drug discovery, and mobility, so as to enrich people's everyday activities.
In particular, we are currently carrying out the project "Development of Next-Generation Artificial Intelligence Technology That Evolves Together With Humans," commissioned by the New Energy and Industrial Technology Development Organization (NEDO). Under this project we conduct large-scale research and development, in collaboration with universities, research institutes, companies, and other public organizations in Japan and abroad, on three main themes: 1. fundamental technologies for AI systems that evolve together with humans; 2. evaluation and management methods for trustworthy AI in the real world; 3. AI technologies that can be easily constructed and deployed.
As part of this project, we will hold an international symposium to present the latest research and development results and to discuss the future direction of "co-evolving AI" with experts and specialists on AI technology from Japan and abroad. The symposium will also share visions of future AI technologies for tackling emerging societal and global challenges such as cyber-physical attacks, climate change, and digital transformation.
We look forward to your active participation.
2. Overview
Date: Thursday, February 2 – Friday, February 3, 2023
Venue: Miraikan Hall (7F), National Museum of Emerging Science and Innovation (Miraikan), 2-3-6 Aomi, Koto-ku, Tokyo [Access]
Organizer: Artificial Intelligence Research Center, Department of Information Technology and Human Factors, National Institute of Advanced Industrial Science and Technology (AIST)
Simultaneous Japanese-English interpretation will be provided.
3. Registration (free of charge)
Japanese: Please register here.
English: Click here.
(If you find that you cannot attend, please cancel as early as possible so that your place can be offered to someone else.)
Deadline: Wednesday, February 1
Registration will close as soon as capacity is reached.
Capacity is limited to 150 participants as a COVID-19 precaution.
There will be no on-site registration on the day of the event.
4. Program
2023/2/2: Closed Workshop (Details) (invited participants only)

| Time | Parallel Session 1 (Room Mars) | Parallel Session 2 (Room Venus) | Parallel Session 3 (Room Mercury) |
|---|---|---|---|
| 13:00 - 13:30 | Registration | | |
| 13:30 - 15:30 | GeoAI (Moderators: Dr. Akiyoshi MATONO, Dr. Masaki ONISHI) | Cross-disciplinary Knowledge-embedded AI: NLP (Moderators: Dr. Hiroya TAKAMURA, Prof. Makoto MIWA) | Visual Scene Understanding (Moderator: Dr. Ryusuke SAGAWA) |
| 15:30 - 16:00 | Coffee Break ☕ | | |
| 16:00 - 18:00 | Privacy-preserving Machine Learning (Moderators: Dr. Masahiro MURAKAWA, Dr. Kyoungsook KIM) | Cross-disciplinary Knowledge-embedded AI: Digital Human (Moderators: Dr. Mitsunori TADA, Dr. Natsuki MIYATA) | Multimodal Signal Processing (Moderators: Dr. Jun OGATA, Dr. Akira SASOU) |
| 18:00 - 18:30 | Break | | |
| 18:30 - 20:30 | Preliminary Meeting of Open Workshop | | |
2023/2/3: Open Seminar "Next-Generation Artificial Intelligence Evolving Together With Humans" (open to all registered participants)

| Time | Program |
|---|---|
| 9:30 - 10:00 | Registration |
| 10:00 - 10:30 | Opening Remarks: Dr. Junichi TSUJII (AIST Fellow, Director of AIRC) |
| 10:30 - 12:10 | 1. Fundamental Technologies for Human-centric AI |
| 10:30 - 10:55 | Trust in Human, Trust in AI – Prof. Yasuhiro KATAGIRI (President, Future University Hakodate, Japan) |
| 10:55 - 11:20 | From Graphic System Interactions to Embodied Social Experience in XR Shared Spaces – Prof. Didier STRICKER (DFKI, Germany) |
| 11:20 - 11:45 | The Integration of GeoAI and Streaming Data in Urban Flood Response: A Real Case Study in Taoyuan City – Prof. Tien-Yin CHOU (Feng Chia University/GIS, Taiwan) |
| 11:45 - 12:10 | Foundation Technology of Visual and Verbal Explanation for Human-AI Evolvement (tentative) – Prof. Komei SUGIURA (Keio University, Japan) |
| 12:10 - 13:30 | Lunch 🍱 |
| 13:30 - 15:10 | 2. AI Trustworthiness for Real-world Adaptation |
| 13:30 - 13:55 | Application of Privacy Preserving Federated Learning in Biomedical Applications – Lessons Learned – Dr. Ravi K. MADDURI (Argonne National Laboratory, USA) |
| 13:55 - 14:20 | Different Flavors on Differentially Private Federated Learning – Prof. Yang CAO (Hokkaido University, Japan) |
| 14:20 - 14:45 | Assessment of AI-systems – Dr. Maximilian PORETSCHKIN (Fraunhofer, Germany) |
| 14:45 - 15:10 | Overview of AIST's AI Quality Management Project – Dr. Yutaka OIWA (AIST, Japan) |
| 15:10 - 15:30 | Coffee Break ☕ |
| 15:30 - 17:35 | 3. Easily Constructed and Integrated AI Technologies |
| 15:30 - 15:55 | Quantum Computation and Artificial Intelligence – From the Lab to the Market – Dr. Salvador Elías Venegas-Andraca (Tecnológico de Monterrey, Mexico) |
| 15:55 - 16:20 | Location & Privacy: Past, Present and Future – Prof. Cyrus SHAHABI (University of Southern California, USA) |
| 16:20 - 16:45 | GeoAI Applications: from land use detection to agricultural yield predictions – Dr. Marlon NUSKE (DFKI, Germany) |
| 16:45 - 17:10 | Pre-training without Natural Images: Introduction of Formula-driven Supervised Learning – Dr. Hirokatsu KATAOKA (AIST, Japan); Large-scale Pre-training of Vision Transformers on Synthetic Datasets – Prof. Rio YOKOTA (Tokyo Institute of Technology, Japan) |
| 17:10 - 17:35 | Variational Deep Learning for Integration of Disjoint Longitudinal Patient Data – Dr. Mike PHUYCHAROEN (The University of Manchester, UK) |
| 17:35 - 17:45 | Closing |
Speakers
Prof. Yasuhiro KATAGIRI (Future University Hakodate, Japan)
Yasuhiro Katagiri received his Ph.D. in Information Engineering from the University of Tokyo in 1981. He worked at NTT Basic Research Labs. and ATR Research Labs., and was director of ATR Media Information Science Laboratories. He is currently president of Future University Hakodate, a fellow of the Japanese Society of Cognitive Science, and president of The Japanese Association of Sociolinguistic Sciences.
Trust in Human, Trust in AI
Trust is indispensable to successful collaboration and cooperation between humans, as well as to the constitution of societies. Rapid advances in AI technology heighten the need both to examine the concept of trust itself and to ensure people's trust in AI. Social-psychological research based on the Prisoner's Dilemma concluded that trust is a kind of non-rational cognitive bias, at the individual level, underlying human decision making. I will review the conceptions behind the common-sense notion of trust, emphasize the role of dialogue interaction in establishing and maintaining trust, and contrast trust in humans with trust in AI.
Prof. Didier STRICKER (DFKI, Germany)
Didier Stricker is a professor in the Computer Science department at the Technical University Kaiserslautern-Landau and scientific director at the German Research Center for Artificial Intelligence (DFKI GmbH). He is head of the research department "Augmented Vision" and coordinator of the European project SHARESPACE. Previously, Didier Stricker headed the "Virtual and Augmented Reality" department at the Fraunhofer Institute for Computer Graphics (Fraunhofer IGD) in Darmstadt, Germany. His research interests include computer vision, body sensor networks, immersive spaces, and natural user interfaces.
More information: http://av.dfki.de
From Graphic System Interactions to Embodied Social Experience in XR Shared Spaces
Augmented Reality has developed greatly over the last 20 years. Nevertheless, current applications are often still limited to simple overlays, such as 2D annotations or 3D objects inserted into the scene. It has long been clear, however, that automated VR/AR systems designed to assist or interact with human activity must have some understanding of human behaviour to be effective. Actions and reactions need to match our expectations, and information needs to be presented in a way that reflects our own perceptions. Less well understood is how this understanding of behaviour is to be achieved. In this talk, we will introduce different technologies for capturing human workflows and for adaptive workflow monitoring with AR. We will then present and discuss a new concept for seamless remote collaboration in an XR space shared by humans and avatars.
Prof. Tien-Yin CHOU (Feng Chia University/GIS, Taiwan)
Tien-Yin (Jimmy) Chou is the director and a lifetime distinguished professor at the Geographic Information Systems (GIS) Research Center, Feng Chia University. He chairs the Open Geospatial Consortium (OGC) Asia Forum and is secretary general of the Asia-Pacific Federation for Information Technology in Agriculture (APFITA). He received his Ph.D. from the Department of Resource Development, Michigan State University, USA, after earning his M.S. and B.S. from the Department of Soil and Water Conservation, National Chung-Hsing University, Taiwan.
The Integration of GeoAI and Streaming Data in Urban Flood Response: A Real Case Study in Taoyuan City
In recent years, water-level stations have been installed on many rivers to monitor water levels. When a certain level is reached, warnings are sent to the relevant authorities so that measures such as bridge and road closures can follow. As the image-recognition ability of AI has improved significantly, this research instead monitors water levels by applying image recognition to CCTV footage, which estimates the water level more directly and could eventually replace radar or pressure water-level gauges. The study uses deep-learning image segmentation to delineate the water surface, then obtains the water level at the intersection of the segmented water surface and a user-drawn virtual water ruler. As a demonstration, 500 images were collected in the study area in the morning, at noon, and in the evening, segmented with Google's DeepLab V3 algorithm, and the water level was recognized from imagery every minute.
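The virtual-ruler idea can be sketched in a few lines: given a binary water-surface mask from a segmentation model, the water level is read off where the mask first intersects a vertical ruler whose endpoint heights are known. The sketch below is illustrative only; the function name, toy mask, and calibration heights are invented for the example and are not from the talk.

```python
import numpy as np

def water_level_from_mask(mask, ruler_col, top_row, bottom_row,
                          top_height_m, bottom_height_m):
    """Estimate water level by intersecting a segmented water mask (1 = water)
    with a virtual ruler: an image column whose two endpoints have known
    real-world heights."""
    column = mask[top_row:bottom_row + 1, ruler_col]
    water_rows = np.flatnonzero(column)       # rows classified as water
    if water_rows.size == 0:
        return None                           # ruler never touches water
    first = water_rows[0]                     # topmost water pixel = surface
    frac = first / (bottom_row - top_row)     # 0 at ruler top, 1 at bottom
    return top_height_m + frac * (bottom_height_m - top_height_m)

# Toy 10x5 mask: water fills the bottom 4 rows of the frame.
mask = np.zeros((10, 5), dtype=np.uint8)
mask[6:, :] = 1
level = water_level_from_mask(mask, ruler_col=2, top_row=0, bottom_row=9,
                              top_height_m=5.0, bottom_height_m=0.0)
print(level)  # about 1.67 m of water against the ruler
```

In a real deployment the mask would come from the segmentation network and the two calibration heights from surveying the ruler's endpoints in the CCTV view.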
Dr. Ravi K. MADDURI (Argonne National Laboratory, USA)
Ravi is a computer scientist in the Data Science and Learning division at Argonne National Laboratory and at the University of Chicago Consortium for Advanced Science and Engineering. His research interests include building sustainable, scalable services for science, reproducible research, development of privacy-enhancing technologies, and large-scale data management and analysis using AI and HPC. Ravi leads the DOE-funded PALISADE-X project, which is developing the Argonne PPFL (APPFL) framework that uses differentially private (DP) algorithms for training federated learning (FL) models on biomedical datasets from multiple organizations. Additionally, Ravi is one of the two leads of the DOE collaboration with the VA on the MVP-CHAMPION project, whose goal is to apply high-performance computing and AI to improve the health of Veterans. As part of this work, Ravi has been instrumental in creating secure private enclaves to host and analyze sensitive healthcare data. In the past, Ravi led the Globus Genomics project (www.globusgenomics.org), which is used by thousands of researchers across the world for genomics, proteomics, and other biomedical computations on the Amazon cloud and other platforms. He architected the Globus Galaxies platform that underpins Globus Genomics and several other cloud-based gateways, realizing the vision of Science as a Service for creating and maintaining sustainable services for science. Ravi plays an important role in applying large-scale data analysis and deep learning to problems in biology. For his work on the "Cancer Moonshot" project, he received the Department of Energy Secretary award in 2017.
Application of Privacy Preserving Federated Learning in Biomedical Applications – Lessons Learned
AI/ML models are known to be vulnerable to dataset shift and underspecification because current deep-learning methods cannot learn the causal structure. The problem manifests when a model is deployed in a real test domain, where even simple changes in demographics or image formats can lead to unexpectedly poor performance, straining credibility. The solution is to train deep-learning models on as many real-world datasets as possible, but access to biomedical datasets is governed by complex data-usage agreements and time-consuming IRB processes. One possible solution is to send models to the data and develop frameworks for secure federated learning with privacy guarantees. In this talk I will present work done under the DOE-ASCR-funded PALISADE-X project, where we developed secure computation techniques on private enclaves, the Argonne Privacy Preserving Federated Learning framework (APPFL), and other strategies to solve challenges in biomedicine.
Prof. Yang CAO (Hokkaido University, Japan)
Yang Cao is an Associate Professor in the Division of Computer Science and Information Technology at Hokkaido University. He earned his Ph.D. from the Graduate School of Informatics, Kyoto University, in 2017. His research interests lie at the intersection of databases, security, and machine learning, and he has published many papers in these areas, including at top venues such as ICDE, AAAI, USENIX Security, and TKDE. Two of his papers were selected as best-paper finalists, at ICDE 2017 and ICME 2020. He is a recipient of the IEEE Computer Society Japan Chapter Young Author Award 2019 and the Database Society of Japan Kambayashi Young Researcher Award 2021.
Towards Differentially Private Federated Learning with Untrusted Server
Federated learning has received increasing attention in academia and industry as a new privacy-preserving machine-learning paradigm. Unlike traditional machine learning, which requires collecting data before training, in federated learning the clients collaboratively train a model under the coordination of a central server: the clients share only model updates with the server, and all raw data is stored locally. However, recent studies have shown that the model updates may still reveal sensitive information to the server. In addition, federated learning by itself does not guarantee formal privacy. This talk will review recent advances in differentially private federated learning under untrusted servers, introduce our attempts towards this goal leveraging LDP, the shuffle model of DP, and TEEs, and discuss some open problems.
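The local-DP flavor discussed above can be illustrated in a few lines: each client clips its model update and adds Gaussian noise before the untrusted server ever sees it, and the server only averages the privatized updates. This is a minimal numpy sketch with made-up hyperparameters, not the APPFL framework or any published implementation.

```python
import numpy as np

def privatize_update(update, clip_norm, noise_std, rng):
    """Client side: bound the update's L2 norm, then add Gaussian noise,
    so the server only ever sees a noisy, clipped vector."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def server_round(model, client_updates, lr=0.1, clip_norm=1.0,
                 noise_std=0.05, seed=0):
    """Server side: average the privatized updates and take one step.
    Raw client data never leaves the clients."""
    rng = np.random.default_rng(seed)
    noised = [privatize_update(u, clip_norm, noise_std, rng)
              for u in client_updates]
    return model - lr * np.mean(noised, axis=0)

model = np.zeros(4)
updates = [np.array([2.0, 0.0, 0.0, 0.0]),   # norm 2 -> clipped to norm 1
           np.array([0.0, 1.0, 0.0, 0.0])]
model = server_round(model, updates)
print(model.shape)  # (4,)
```

The clip norm bounds each client's sensitivity, which is what lets the Gaussian noise be calibrated to a DP guarantee; in the shuffle-model and TEE variants mentioned in the talk the trust assumptions change, but the clip-then-noise step is the common core.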
Dr. Maximilian PORETSCHKIN (Fraunhofer, Germany)
Maximilian Poretschkin is head of “Safe AI and AI-Certification” at the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS. In this role, Dr. Poretschkin has led a large variety of projects and AI assessments with global partners from industry and government. Particularly noteworthy is the project “CERTIFIED AI”, which develops testing principles, assessment tools and standards for AI systems and has published one of the first assessment catalogs for AI systems. In addition, Dr. Poretschkin is head of the working group "AI assessment and certification" of the German standardization roadmap AI.
Stages prior to his position at Fraunhofer included a position as a consultant at the strategy consulting firm Bain & Company and a postdoctoral position at the University of Pennsylvania in Philadelphia. Dr. Maximilian Poretschkin studied physics and mathematics in Bonn and Amsterdam and received the doctoral award for the best dissertation at the University of Bonn in 2015. He is an alumnus of the German National Academic Foundation.
Assessment of AI-systems
Artificial intelligence is penetrating more and more areas of our lives and taking on increasingly responsible tasks. At the same time, this technology of the future can only develop its full potential if sufficient trust is created in its use. AI assessments (by independent parties) can be an appropriate means of promoting this trust. Such assessments are partly required by the currently evolving regulatory landscape for AI; they can also be beneficial on a voluntary basis, for example as a basis for insuring AI systems or for establishing a strong brand. At the same time, AI systems differ greatly from classically programmed software, so established procedures for testing software fall short of systematically proving that AI systems meet the requirements demanded of them. The talk starts with a short overview of the planned European regulation of AI systems and then presents an AI assessment catalog developed at Fraunhofer IAIS that addresses important challenges in the evaluation of AI systems. Finally, some practical examples of AI assessments carried out on the basis of the catalog are presented.
Dr. Salvador Elías Venegas-Andraca (Tecnológico de Monterrey, Mexico)
Salvador E. Venegas-Andraca is a professor of Computer Science at Tecnologico de Monterrey, where he heads the Quantum Information Processing group and is the founder and Principal Investigator of the Unconventional Computing Lab.
Salvador holds a DPhil in Physics (2006) and an MSc by research in Computer Vision (2002), both degrees awarded by the University of Oxford, as well as an MBA (Hon) and a BSc (Hon) in Digital Electronics and Computer Science, these two last degrees awarded by Tecnologico de Monterrey.
Salvador is a leading scientist in the field of quantum walks, the founder of quantum computing in Mexico, and a cofounder of the field of Quantum Image Processing. His research interests include quantum algorithms, the analysis of biological information via quantum algorithms, quantum machine learning, quantum cybersecurity, and the algorithmic analysis of NP-hard/NP-complete problems.
Salvador is the author of Quantum Walks for Computer Scientists (2008), the first book ever written on the scientific field of quantum walks, and the co-author of "Quantum Image Processing" (Springer, 2020), the first book ever written fully focused on storing, processing, and retrieving visual information using quantum mechanical systems.
Quantum Computation and Artificial Intelligence – From the Lab to the Market
In this talk I shall define the notion of quantum computing and its constituent parts. Moreover, I shall provide a succinct introduction to the history of quantum computing followed by a concise review of the mathematical and computational foundations of this discipline. I will then talk about quantum algorithms and present several results of my research group, with an emphasis on the multidisciplinary nature of quantum computing and two key areas in this field: artificial intelligence and cybersecurity. I will finish by briefly addressing some key features of the emerging high-tech market of quantum technologies.
Prof. Cyrus SHAHABI (University of Southern California, USA)
Cyrus Shahabi received his B.S. in Computer Engineering from Sharif University of Technology and his M.S. and Ph.D. degrees in Computer Science from the University of Southern California (USC). He is a Professor of Computer Science, Electrical & Computer Engineering, and Spatial Sciences; Helen N. and Emmett H. Jones Professor of Engineering; and the director of the Integrated Media Systems Center (IMSC) at USC. He was also the chair of the Computer Science Department at USC from 2017 to 2022. He was co-founder of two USC spin-offs, Geosemble Technologies and Tallygo. He has authored two books and more than three hundred research papers in databases, GIS, and multimedia, with 14 US patents. He was an Associate Editor of IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Knowledge and Data Engineering, and the VLDB Journal. He is currently on the editorial boards of ACM Transactions on Spatial Algorithms and Systems and ACM Computers in Entertainment. Dr. Shahabi is a recipient of the ACM Distinguished Scientist award, the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE), and the NSF CAREER award. He is a fellow of the National Academy of Inventors (NAI) and IEEE.
Location & Privacy: Past, Present and Future
In this talk, I will review various types of mobile applications, from Location-Based Services (LBS) and ride-sharing to pandemic risk-prediction applications, which collect, analyze, and use location data to provide services to users. I will explain some of the main underlying technologies (e.g., kNN queries and contact-network analysis) that enable these applications from a spatial data management and analysis (aka GeoAI) perspective. I will also discuss some of the privacy concerns raised by these applications due to their location leaks and review several approaches to protecting location privacy without sacrificing the utility of these applications. I will wrap up by presenting some newly envisioned applications and their corresponding open problems.
Dr. Marlon NUSKE (DFKI, Germany)
After obtaining his PhD in Physics from the University of Hamburg, Marlon Nuske joined the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern as a Senior Researcher, where he has led the Earth and Space Applications team since September 2021. His research focuses on transferring DFKI's long-standing experience in traditional image analysis to earth-observation applications. He also leads the AI4EO Solution Factory (https://www.ai4eo-factory.de/), a project funded by the ESA InCubed Programme (https://incubed.esa.int/portfolio/ai4eo-solution-factory/) that aims to harness earth-observation data and AI for a diverse set of industrial applications by reusing the underlying data pipelines and machine-learning models.
GeoAI Applications: from land use detection to agricultural yield predictions
Geospatial Artificial Intelligence sees a rapidly increasing amount and diversity of applications, many of which include the automatic analysis of satellite imagery. While there are huge labelled datasets for traditional image analysis, we usually have to work with limited amounts of labelled data for satellite imagery. In this talk, we present some of our GeoAI applications, such as agricultural yield prediction, land use land cover (LULC) classification, and flood mapping. We highlight different approaches to making use of the vast amount of geospatial input data sources while being limited in the amount of labelled data accessible.
Dr. Hirokatsu KATAOKA (AIST, Japan)
Hirokatsu Kataoka received his Ph.D. in engineering from Keio University in 2014. He is a Senior Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). His research interests include computer vision and pattern recognition, especially large-scale datasets for image and video recognition. He has received the ACCV 2020 Best Paper Honorable Mention Award, the AIST 2019 Best Paper Award, and the ECCV 2016 Workshop Brave New Idea.
Pre-training without Natural Images: Introduction of Formula-driven Supervised Learning
Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The presentation introduces a novel concept, Formula-driven Supervised Learning (FDSL). We automatically generate image patterns and their category labels by assigning fractals, which are based on a natural law. Although models pre-trained with the proposed Fractal DataBase (FractalDB), a database without natural images, do not necessarily outperform models pre-trained with human-annotated datasets in all settings, we are able to partially surpass the accuracy of ImageNet/Places pre-trained models.
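The FDSL recipe (formula in, labeled image out) can be illustrated with a tiny iterated-function-system generator: each randomly sampled set of affine maps defines one "category", and rendering its attractor yields a training image whose label comes for free from the formula parameters. This is only a conceptual sketch of the idea, not the actual FractalDB generation code; the parameter ranges and rendering details here are invented for illustration.

```python
import numpy as np

def render_ifs(maps, size=64, n_points=20000, seed=0):
    """Render the attractor of an iterated function system (IFS) as a binary
    image via the chaos game: repeatedly apply a randomly chosen affine map."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size), dtype=np.uint8)
    pt = np.zeros(2)
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        pt = A @ pt + b
        if i > 20:  # skip a few burn-in iterations before plotting
            x, y = np.floor((pt + 1.5) / 3.0 * (size - 1)).astype(int)
            if 0 <= x < size and 0 <= y < size:
                img[y, x] = 1
    return img

def random_category(rng, n_maps=3):
    """One FDSL 'category' = one randomly sampled set of affine maps."""
    return [(rng.uniform(-0.5, 0.5, (2, 2)), rng.uniform(-0.5, 0.5, 2))
            for _ in range(n_maps)]

# Labels are defined by the generating formula, so no human annotation is needed.
rng = np.random.default_rng(42)
dataset = [(render_ifs(random_category(rng)), label) for label in range(4)]
```

A pre-training pipeline would then treat `dataset` exactly like a labeled image dataset, scaling the number of categories and instances by sampling more formulas.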
Prof. Rio YOKOTA (Tokyo Institute of Technology, Japan)
Rio Yokota is a Professor at the Global Scientific Information and Computing Center, Tokyo Institute of Technology. His research interests lie at the intersection of high performance computing, linear algebra, and machine learning. He is the developer of numerous libraries for fast multipole methods (ExaFMM), hierarchical low-rank algorithms (Hatrix), and information matrices in deep learning (ASDL) that scale to the full system on the largest supercomputers today. He has been optimizing algorithms on GPUs since 2006 and was part of a team that received the Gordon Bell Prize in 2009 using the first GPU supercomputer. Rio is a member of ACM, IEEE, and SIAM.
Large-scale Pre-training of Vision Transformers on Synthetic Datasets
The true potential of vision transformers (and transformers in general) is realized only when they are pre-trained on extremely large datasets. Recent advances in self-supervised learning on open-source text+image pairs such as LAION have solved the issue of labeling such large datasets. However, at such scale it is infeasible to manually remove ethical issues in the dataset, such as societal bias, copyright infringement, and privacy violations. In the present work, we scaled up formula-driven supervised learning (FDSL) to investigate the possibility of using synthetic datasets to pre-train vision transformers at large scale. We show that a ViT-Base model pre-trained on synthetic datasets can outperform one pre-trained on ImageNet-21k on multiple downstream tasks.
Dr. Mike PHUYCHAROEN (The University of Manchester, UK)
Mike Phuycharoen received an MEng in Electronic & Electrical Engineering from the University of Bath and a PhD in Computer Science from the University of Manchester. Prior to becoming a computational biologist, he worked as a mobile software developer. His current research involves machine-learning methods for the analysis of genomic sequences to discover features of DNA-protein binding and chromatin state, tracking of RNA molecules in multi-dimensional microscopy images, and deep-learning algorithms for the integration and normalization of multi-modal cytometry data.
Variational Deep Learning for Integration of Disjoint Longitudinal Patient Data
Patient data of cellular measurements often come from heterogeneous sources with many types of technical variability. Our method is a flexible variational deep-learning framework for the integration of disjoint cytometry data from large cohorts of patients, originating from multiple sources and acquisition batches. We benchmark various types of conditional modelling and data normalization, and subsequently apply our method to integrate longitudinal flow-cytometry data collected from COVID-19 patients admitted to four hospitals in Manchester, UK.
Dr. Luis Alberto Muñoz Ubando (Tecnológico de Monterrey, Mexico)
Dr. Luis Alberto Muñoz Ubando trained as a Computer Technician at IPN-CECYT 9 "Juan De Dios Batiz" and is an Electronic Systems Engineer (Tec de Monterrey, 1993), Master in Scientific Computing (INRIA, France, 1994), and Doctor in Images, Vision and Robotics (INRIA, France, 1999), with a post-doc in Industrial Robotics (Oxford, UK, 1998-2000) and a sabbatical in Cognitive Vision (TU Wien, Austria, 2005-06). He has been Director of Innovation for Grupo Plenum in Mérida, Yucatán, since 2008. In addition to working with Televisa, Banamex, McDonald's, ITESM, ITAM, UA, and UNAM, he founded or participated in the creation of 14 companies in Mexico and the United States. He has worked as a researcher at the University of Pisa, Italy (1996), in Tokyo, Japan (1995), in Karlsruhe, Germany (1994-95), at Stanford University (2002), and at the University of Massachusetts at Lowell (2006). From 2000 to 2009 he worked for the Federal Government at the National Council of Science and Technology (CONACYT) and at the UNAM Faculty of Sciences. During his time at UADY, he participated in the creation of the Bachelor's Degree in Computer Engineering and the Master's Degree in Mathematical Sciences. As a thesis director, he has supervised more than 40 undergraduate, 15 master's, and 6 doctoral theses, and has initiated and participated in various graduate and engineering programs at the university. In 1999, he translated the illustrated "Dictionnaire Illustré de la Robotique" into Spanish. In 2008 he founded the Robotics Institute of Yucatán (www.triy.org) in Mérida, with the purpose of developing early science and technology skills in more than 500 children and young people. He holds patents in energy, logistics, e-health, sustainability, and education. He is a member of CACEI, serves on the committee of the Journal of Software Engineering for Robotics (www.joser.com), and acts as an evaluator for several journals in the fields of scientific research, technological development, and innovation management.
He mentors Startup Mexico, Talentum, and other talent-development organizations, and was part of the first generation of the Stanford Go-to-Market program in Mexico. In 2015 he became a regular member of the Mexican Academy of Computing. He is a guest columnist for El Financiero and a full professor at Tec de Monterrey.
Closed Workshop Invited Speaker
Prof. Jiajun Wu (Stanford University, USA)
Jiajun Wu is an Assistant Professor of Computer Science at Stanford University, working on computer vision, machine learning, and computational cognitive science. Before joining Stanford, he was a Visiting Faculty Researcher at Google Research. He received his PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Wu's research has been recognized through the AFOSR Young Investigator Research Program (YIP), the ACM Doctoral Dissertation Award Honorable Mention, the AAAI/ACM SIGAI Doctoral Dissertation Award, the MIT George M. Sprowls PhD Thesis Award in Artificial Intelligence and Decision-Making, the 2020 Samsung AI Researcher of the Year, the IROS Best Paper Award on Cognitive Robotics, and faculty research awards from JPMC, Samsung, Amazon, and Meta.
Closed Workshop Invited Speaker
Dr. Boqing Gong (Google, USA)
Boqing Gong is a Staff Research Scientist at Google Research. His research in machine learning and computer vision focuses on generalization, efficiency, and the visual analytics of objects, scenes, human activities, and their attributes. Before joining Google in 2019, he worked at Tencent and was a tenure-track Assistant Professor at the University of Central Florida (UCF). He received an NSF CRII award (so-PI) in 2016 and an NSF BIGDATA award (PI) in 2017, both of which were the first of their kind ever granted to UCF. He earned a Ph.D. degree in 2015 at the University of Southern California, where the Viterbi Fellowship partially supported his work. Boqing has served as a program co-chair for WACV 2023, tutorial co-chair for CVPR 2022, and (senior) area chair for CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, AISTATS, and AAAI.
Closed Workshop Invited Speaker
Prof. Chen Sun (Brown University, USA)
Chen Sun is an assistant professor of computer science at Brown University, studying computer vision, machine learning, and artificial intelligence. Chen received his Ph.D. from the University of Southern California in 2016 and his bachelor's degree from Tsinghua University in 2011.
Closed Workshop Invited Speaker
Contact
(Please contact us by e-mail.)