We developed a real-time system for recognizing everyday objects that gradually improves recognition accuracy by moving the camera to view the object from different angles. We proposed a new deep learning method with a pose estimation module and won first prize in an international competition on 3D object retrieval. In addition, we constructed a publicly available multi-view image dataset comprising 132 items in 12 categories of daily necessities.
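A minimal sketch of how recognition confidence can accumulate over viewpoints: per-view class probabilities are fused by summing log-likelihoods (naive Bayes fusion). The fusion rule and the numbers are illustrative assumptions, not the system's actual method.

```python
import numpy as np

def fuse_views(view_probs):
    """Fuse per-view class probabilities by summing log-likelihoods;
    confidence typically sharpens as more views are added."""
    log_p = np.log(np.asarray(view_probs) + 1e-9)
    fused = log_p.sum(axis=0)
    fused = np.exp(fused - fused.max())   # normalize stably
    return fused / fused.sum()

# Three views of the same object; class 1 becomes more certain as views accumulate.
views = [
    [0.4, 0.5, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.7, 0.1],
]
p_one = fuse_views(views[:1])   # after a single view
p_all = fuse_views(views)       # after all three views
```

Here `p_all[1]` exceeds `p_one[1]`, illustrating why actively changing the viewing angle can raise accuracy over time.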
【Keywords】Object recognition, deep learning, multi-view image dataset
There is a social need for introducing life support robots. However, the computing power that can be implemented on a life support robot is limited in terms of cost, size, and power consumption, making it difficult to run image analysis algorithms on board. This research aims to develop a system that divides the processing of a computationally expensive convolutional neural network (CNN) between the robot and the cloud, thereby reducing the computing load on the robot. The system can also take privacy into account by transmitting a feature map obtained through convolutional processing instead of the raw image. How much of the convolutional processing is performed on the robot, with the remainder on the cloud, can be dynamically adjusted for more efficient processing according to the respective computing capabilities of the robot and the cloud, communication conditions, and so on.
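The split-computation idea above can be sketched as follows. The "CNN" here is a toy stack of random dense layers standing in for convolutional layers; the point is that the result is identical wherever the network is cut, so the split point `split_at` can be retuned at runtime to the robot's load and the network conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy network: 6 layers of (matmul + ReLU); a real system would use an actual CNN.
layers = [lambda x, W=rng.normal(size=(8, 8)): np.maximum(x @ W, 0)
          for _ in range(6)]

def run_split(x, split_at):
    """Run the first `split_at` layers on the robot, send only the
    intermediate feature map (not the raw image) to the cloud, and
    run the remaining layers there."""
    feat = x
    for layer in layers[:split_at]:       # robot side
        feat = layer(feat)
    payload = feat                         # transmitted feature map
    for layer in layers[split_at:]:        # cloud side
        payload = layer(payload)
    return payload

x = rng.normal(size=(1, 8))
out_shallow = run_split(x, 1)   # robot does little, cloud does most
out_deep = run_split(x, 4)      # robot does most, cloud does little
assert np.allclose(out_shallow, out_deep)  # output is split-invariant
```

Transmitting the feature map rather than the image is what gives the privacy benefit mentioned above: the raw pixels never leave the robot.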
【Keywords】robotics, cloud recognition engine
With the aim of creating a recognition system capable of recognizing a wide variety of items, this research will construct a cloud database for efficient recognition of everyday items and convenience store products, targeting robots for manufacturing sites, life support, and service. The database will include 3D models of objects as well as candidate grasping points. The 3D models can be rendered on the cloud side to obtain RGB images, depth images, and point cloud data.
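A hypothetical record layout for such a database might pair each 3D model with its candidate grasping points. The class name, fields, and example values below are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectEntry:
    """One record in a hypothetical cloud object database (illustrative schema)."""
    object_id: str
    category: str
    mesh_path: str                # 3D model, renderable on the cloud side
    grasp_points: list = field(default_factory=list)  # (x, y, z, approach) candidates

db = {
    "cup_001": ObjectEntry("cup_001", "tableware", "meshes/cup_001.obj",
                           grasp_points=[(0.0, 0.03, 0.08, "side")]),
}

def find_by_category(db, category):
    """Simple category lookup, as a robot might query the cloud DB."""
    return [e for e in db.values() if e.category == category]

matches = find_by_category(db, "tableware")
```

Storing grasp candidates alongside the mesh lets a robot retrieve both recognition and manipulation data in a single query.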
【Keywords】cloud database, object search
We are tackling the following two problems in teaching motions to robots: (1) it is difficult to adapt to dynamic environmental changes with trajectory teaching alone, and (2) programming costs are enormous. Our approach, called deep predictive learning, combines (1) a motion model with high environmental adaptability, based on a deep learning model trained in an end-to-end manner to acquire forward and inverse models from visual, haptic, and other sensory information, and (2) imitation learning through intuitive interfaces such as direct teaching. With deep predictive learning, robots are expected to be able to perform a wide variety of tasks.
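The forward/inverse pairing can be sketched minimally: a learned forward model predicts the next sensory state from the current state and a command, and the inverse direction is obtained by searching for the command whose prediction best matches a goal. The tanh model and candidate search below are toy assumptions, far simpler than the actual end-to-end networks.

```python
import numpy as np

def forward_model(state, command, W):
    """Toy learned forward model: predicts the next sensory state."""
    return np.tanh(W @ np.concatenate([state, command]))

def choose_command(state, goal, W, candidates):
    """Inverse use of the forward model: pick the candidate command
    whose predicted outcome is closest to the goal state."""
    errors = [np.linalg.norm(forward_model(state, c, W) - goal)
              for c in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))            # state dim 3, command dim 2
state = np.zeros(3)
goal = forward_model(state, np.array([0.5, -0.2]), W)  # goal reachable by this command
candidates = [np.array([0.5, -0.2]), np.array([-1.0, 1.0]), np.zeros(2)]
best = choose_command(state, goal, W, candidates)
```

Because prediction, not a fixed trajectory, drives command selection, the same mechanism keeps working when the environment shifts the goal state.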
【Keywords】Deep learning, Imitation learning, Robot supporting for various daily tasks
Because the amount of scientific literature is so huge, the speed and quality of curation for life-science databases (DBs), such as those of enzyme reactions and signaling pathways, depend on the expertise and English abilities of the curators. To lessen these dependencies, excellent text mining systems are needed to support curators' tasks. To support such DB construction, we enhance text mining techniques to develop a system that extracts structured information on life phenomena (biological events), such as enzyme reactions and protein-protein interactions, from the literature.
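To make the notion of "structured biological events" concrete, here is a deliberately naive surface-pattern extractor producing (event type, agent, theme) triples. The trigger verbs and protein names are illustrative; a real curation-support system would use trained models, not regexes.

```python
import re

# Hypothetical trigger patterns for two biological event types.
PATTERNS = {
    "phosphorylation": re.compile(r"(\w+) phosphorylates (\w+)"),
    "binding": re.compile(r"(\w+) binds (?:to )?(\w+)"),
}

def extract_events(sentence):
    """Return (event_type, agent, theme) triples found by surface patterns."""
    events = []
    for etype, pat in PATTERNS.items():
        for m in pat.finditer(sentence):
            events.append((etype, m.group(1), m.group(2)))
    return events

found = extract_events("MEK1 phosphorylates ERK2, and GRB2 binds to SOS1.")
```

Each triple corresponds to one structured DB entry that a curator would otherwise transcribe by hand.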
【Keywords】Text mining, Curation of scientific literature, Event extraction
The behavior of deformable objects such as strings, paper, and cloth under manipulation is hard to predict and approximate computationally. This is one of the main reasons why progress in robotic manipulation of deformable objects has been slow. This research develops technologies for enabling robots to perform a variety of tasks involving deformable object manipulation, focusing on topics such as suitable knowledge representations, novel methods for recognition and manipulation planning, efficient learning from human demonstration, and skill acquisition through autonomous exploration. The goal is to enable the acquisition of manipulation abilities suited to deformable objects via simple learning processes, along with the visualization of these abilities.
【Keywords】Deformable object recognition and manipulation, Automatic generation of manipulation procedures, Skill learning
For AI to support everyday life activities such as nursing and child care, it must be able to recognize and understand our everyday actions. In this project, we are developing technology that transforms everyday scenes captured by a video camera into natural language sentences describing those scenes. We also plan to develop a multimodal long-term memory and realize a natural language question-answering system that can answer questions about home events going back several days.
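The retroactive question-answering idea can be sketched as a store of timestamped scene captions queried over a time window. The class, captions, and keyword matching below are illustrative assumptions; the real system targets full natural language questions over multimodal memory.

```python
from datetime import datetime, timedelta

class EventMemory:
    """Minimal long-term memory of captioned scenes (illustrative only)."""
    def __init__(self):
        self.entries = []  # (timestamp, caption) pairs

    def store(self, when, caption):
        self.entries.append((when, caption))

    def query(self, keyword, since):
        """Answer 'what happened ... in the last N days?' style questions."""
        return [(t, c) for t, c in self.entries
                if t >= since and keyword in c]

mem = EventMemory()
now = datetime(2024, 5, 10, 12, 0)
mem.store(now - timedelta(days=2), "the child played with blocks in the living room")
mem.store(now - timedelta(days=9), "the child played with blocks in the living room")
# Only the event within the last week falls inside the query window.
hits = mem.query("blocks", since=now - timedelta(days=7))
```

Captions generated from video thus double as a searchable index of household events.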
【Keywords】action recognition, deep learning, caption generation from video, video question-answering
- Recognition of functional attributes estimated from the local shapes of objects such as cups and spoons
- Model-less recognition using machine learning (DNNs) and 2D/3D data
- Recognized information can be used to generate robot motion parameters
- A dataset of over 220 everyday objects with 7 functional attribute labels
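The link from functional attributes to motion parameters, noted above, might look like the following mapping. The attribute names and parameter values are hypothetical stand-ins for the project's 7 attribute labels.

```python
# Hypothetical mapping from recognized functional attributes to grasp parameters.
ATTRIBUTE_TO_MOTION = {
    "handle":  {"grasp": "wrap",  "approach": "side"},
    "contain": {"grasp": "rim",   "approach": "top"},
    "scoop":   {"grasp": "pinch", "approach": "top"},
}

def motion_parameters(attributes):
    """Collect motion parameters for every attribute detected on an object;
    e.g. a cup may expose both 'handle' and 'contain'."""
    return {a: ATTRIBUTE_TO_MOTION[a]
            for a in attributes if a in ATTRIBUTE_TO_MOTION}

params = motion_parameters(["handle", "contain"])
```

Because parameters attach to attributes rather than object identities, a never-seen object with a recognizable handle can still be grasped.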
【Keywords】Functional attribute, Deep learning, Everyday objects, Affordance
We develop brain-like models inspired by the dynamics of neural networks, together with physical models of brain-like artificial intelligence that assist individual decision-making by drawing on personal experiences and memories. For models of the hippocampus, amygdala, and cerebral cortex, we establish brain-like integrated-circuit architectures with extremely low energy consumption, based on the time-domain analog computation required to implement the models. Physical models that integrate these three brain regions are developed, and their feasibility is evaluated by applying them to tasks in the RoboCup @Home league.
【Keywords】physical model, time-domain analog calculation, integrated circuit, RoboCup @Home