
Workshop Description

Deep Reinforcement Learning (RL) has been widely applied across domains including computer games, language, vision, and real robot control (Mnih et al., 2015; Narasimhan et al., 2015; Yuan et al., 2018). In real-world applications, however, state-of-the-art RL algorithms still face several challenges, including sample inefficiency, explainability of the learned policy, partial observability, dynamic environments, sparse rewards, and safety constraints. For example, these methods require many training trials to converge to the optimal action policy because the environment's state space is extremely large. Moreover, even when the algorithm converges, the trained action policy is not understandable to human operators because it is stored in a black-box deep neural network. These issues become critical when human operators want to verify the trained rules, control the trained agent, or add action restrictions.

To address these issues, reinforcement learning methods that introduce symbolic representations and reasoning into deep neural networks have been proposed (Dong et al., 2019; Anderson et al., 2020; Kimura et al., 2021; Chaudhury et al., 2021). The Neural Logic Machine (NLM, Dong et al. (2019)) adopts a neural-symbolic architecture for both inductive learning and logic reasoning, using tensors to represent logic predicates. Reinforcement Learning with Formally Verified Exploration (REVEL, Anderson et al. (2020)) maintains two policy classes: a general neurosymbolic class with approximate gradients and a more restricted class of symbolic policies that allows efficient verification. Neuro-Symbolic Reinforcement Learning with First-Order Logic in LNN (FOL-LNN, Kimura et al. (2021)) extracts first-order logical facts from textual observations and an external word-meaning network, and trains a policy using a Logical Neural Network (LNN, Riegel et al. (2020)) with directly interpretable logical operators. SymboLic Action policy for Textual Environments (SLATE, Chaudhury et al. (2021)) learns interpretable action-policy rules from symbolic abstractions of textual observations for improved generalization. These methods are collectively referred to as “Neuro-Symbolic Reinforcement Learning”, and they combine knowledge-driven symbolic reasoning with data-driven machine learning.
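As a concrete illustration of this general recipe (and not of any of the specific methods above), the toy Python sketch below assumes a hypothetical text environment: logical facts are extracted from a textual observation by simple keyword matching, the action policy is a small set of weighted, human-readable rules over those facts, and the rule weights are adjusted with a REINFORCE-style update. The environment, fact extractor, rules, and reward are illustrative assumptions.

```python
# Toy neuro-symbolic RL sketch (hypothetical; not NLM, REVEL, FOL-LNN, or SLATE).
import math
import random

# Candidate rules: (required facts) -> action. Each rule carries a learned weight,
# so the policy remains readable and individual rules can be inspected or removed.
RULES = [
    ({"at(door)", "has(key)"}, "open door"),
    ({"at(door)"}, "go east"),
    ({"sees(key)"}, "take key"),
]
weights = [0.0] * len(RULES)

def extract_facts(observation_text):
    """Hypothetical fact extractor: map keywords in the text to logical facts."""
    facts = set()
    if "door" in observation_text:
        facts.add("at(door)")
    if "key on the floor" in observation_text:
        facts.add("sees(key)")
    if "carrying a key" in observation_text:
        facts.add("has(key)")
    return facts

def act(facts):
    """Softmax over the rules whose body is satisfied by the current facts."""
    fired = [i for i, (body, _) in enumerate(RULES) if body <= facts]
    if not fired:
        return None, {}
    exps = {i: math.exp(weights[i]) for i in fired}
    z = sum(exps.values())
    probs = {i: e / z for i, e in exps.items()}
    choice = random.choices(fired, weights=[probs[i] for i in fired])[0]
    return choice, probs

def reinforce_update(choice, probs, reward, lr=0.1):
    """REINFORCE-style update: reinforce the rules that led to reward."""
    for i, p in probs.items():
        grad = (1.0 - p) if i == choice else -p
        weights[i] += lr * reward * grad

# One illustrative step: read a toy observation, fire matching rules,
# sample an action, and update the chosen rule after receiving a reward.
obs = "You are at the door, carrying a key."
facts = extract_facts(obs)
choice, probs = act(facts)
if choice is not None:
    print("facts:", facts, "-> action:", RULES[choice][1])
    reinforce_update(choice, probs, reward=1.0)
    print("rule weights:", weights)
```

Because the policy is just a weighted list of rules, an operator can see which rule fired for a given observation, restrict or remove individual rules, and read off the learned weights; this is the kind of interpretability and controllability that the neuro-symbolic approaches above aim for at scale.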

We believe that incorporating symbolic representations and reasoning into deep learning can potentially solve many of the challenges facing action decision making and reinforcement learning. The primary goal of this workshop is to facilitate community building: we hope to bring researchers together to consolidate this line of research and foster collaboration in the community.

In this workshop, we will cover the following challenges in decision making and RL, among others: sample inefficiency, explainability of learned policies, partial observability, dynamic environments, sparse rewards, and safety constraints.

Call for Paper

We welcome original research papers between 4 and 8 pages in length (not including references or supplementary materials), formatted according to the IJCAI guidelines (link). Reviews are double-blind, so no identifying information should appear on the papers.

Please submit your paper here: Microsoft CMT site

We also welcome extended abstracts of up to 2 pages (not including references) that describe open problems and challenges in the area of neuro-symbolic agents.

The papers will be non-archival, which means we welcome papers that have been published or submitted to other conferences and journals. However, authors are required to acknowledge their papers’ original appearance in such cases.

All accepted papers and extended abstracts will be presented as posters.

The program committee will select some papers for oral presentation. There will be a poster session during the scheduled coffee breaks to facilitate discussions among attendees.

Summary:

Important Dates

Accepted papers

Oral and Poster

Poster

Schedule

Time   Session           Speaker and Title
08:50  Opening           Ndivhuwo Makondo
09:00  Invited talk      Alexander Gray, “Can Knowledge, Reasoning, and Learning be Smoothly Integrated? Toward Safe AI Agents”
10:00  Contributed talk  Daiki Kimura, “Explainable Neuro-Symbolic Reinforcement Learning”
10:30  Coffee break
11:00  Invited talk      Luc De Raedt, “How to Make Logics Neurosymbolic”
12:00  Contributed talk  Aleksandr Panov, “Object-Oriented Decomposition of World Model in Reinforcement Learning”
12:30  Lunch break
14:00  Invited talk      Wang-Zhou Dai, “Towards Openworld Abductive Learning”
15:00  Contributed talk  Naman Shah, “Learning Neuro-Symbolic Abstraction for Motion Planning Under Uncertainty”
15:30  Coffee break
16:00  Contributed talk  Jaeil Park, “A Neuro-Symbolic Approach with Reinforcement Learning for Explainable Question Answering in Pedestrian Anomaly Video Sequence”
16:30  Contributed talk  Daiki Kimura, “Introducing Trial-and-Error Exploration to Avoid Critical Failure for Efficient Reinforcement Learning”
16:45  Contributed talk  Daiki Kimura, “Neuro-Symbolic Model-based RL with Logical Neural Network”
17:00  Poster session    All contributed papers

Invited Talks

Alexander Gray
IBM Thomas J. Watson Research Center
Luc De Raedt
KU Leuven
Wang-Zhou Dai
Nanjing University
* in alphabetical order

Organizers

Alessandra Russo
Imperial College London
Daiki Kimura
IBM Research - Tokyo
(primary contact)
Ndivhuwo Makondo
IBM Research - Africa
Steven James
University of the Witwatersrand
* in alphabetical order

Program Committee

* in alphabetical order

References