We are pleased to announce the upcoming Seventh International Workshop on Symbolic-Neural Learning (SNL2023),
which promises to be an engaging and insightful event for all attendees.
The workshop covers a wide range of research topics associated with various data types,
including language, knowledge graphs, databases, logical operations, semantic representations, and more.
Here are the details of the workshop:
We would like to inform you that Dr. Domae Yukiyasu, our team leader, is serving on the program committee for this workshop.
In addition, Dr. Floris Erich and Dr. Enrique Coronado, researchers on our team, will give poster presentations. Here are the details of their presentations:
Presenter: Enrique Coronado
Session: Poster Session I (June 28, 15:30-16:30)
Title: Bridging Humans, Robots, and Computers using NEP+ tools
This work introduces a set of novel, user-friendly, cross-platform tools and interfaces designed to empower students, researchers,
and end-users and to facilitate cross-disciplinary collaboration. Collectively known as the NEP+ framework, these tools have been designed from a human-centered perspective and are accessible to individuals across multiple fields, including but not limited to robotics. Unlike many state-of-the-art frameworks that primarily target Linux, NEP+ tools are specifically developed to support Windows and macOS users as well.
This inclusive approach could foster the democratization of technology development, giving individuals greater autonomy and potentially encouraging transdisciplinary collaboration.
- URL: https://enrique-coronado.gitbook.io/nep-docs/
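To illustrate the kind of cross-platform messaging that bridges humans, robots, and computers, here is a minimal in-process publish/subscribe sketch. This is not the NEP+ API (the `Broker` class and its methods are hypothetical; see the NEP+ documentation linked above for its actual interfaces); it only shows the pattern such middleware provides:

```python
# Minimal in-process publish/subscribe sketch.
# NOTE: illustrative only -- this is NOT the NEP+ API.
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Routes published messages to all callbacks subscribed to a topic."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("robot/pose", received.append)
broker.publish("robot/pose", {"x": 0.1, "y": 0.2})
print(received)  # [{'x': 0.1, 'y': 0.2}]
```

In a real framework the broker runs over the network (e.g. sockets), which is what lets nodes written in different languages and running on different operating systems exchange messages.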
Presenter: Floris Erich
Session: Poster Session II (June 29, 15:15-16:15)
Title: Depth Completion of Transparent Objects using Augmented Unpaired Dataset
We propose a technique for depth completion of transparent objects using augmented data captured directly from real environments with complicated geometry.
Using cyclic adversarial learning, we train translators to convert between painted versions of the objects and their real transparent counterparts.
The translators are trained on unpaired data, so datasets can be created rapidly and without any manual labeling.
Our technique does not make any assumptions about the geometry of the environment, unlike SOTA systems that assume easily observable occlusion and contact edges,
such as ClearGrasp. We show how our technique outperforms ClearGrasp in a dishwasher environment, in which occlusion and contact edges are difficult to observe.
We also show how the technique can be used to create an object manipulation application with a humanoid robot.
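The cyclic adversarial learning mentioned above pairs the two translators with a cycle-consistency objective: translating a sample to the other domain and back should recover the original. The sketch below illustrates only that loss, using toy linear "translators" `G` and `F` (the real translators are neural networks; all names here are hypothetical):

```python
import numpy as np

# Toy "translators": G maps the painted domain to the transparent domain,
# F maps back. Real translators are learned networks; exact linear inverses
# are used here only so the cycle loss is easy to check.
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4))
F = np.linalg.inv(G)  # a perfect inverse makes the cycle loss (near) zero

def cycle_loss(x: np.ndarray, G: np.ndarray, F: np.ndarray) -> float:
    """L1 cycle-consistency: x -> G(x) -> F(G(x)) should recover x."""
    return float(np.abs(F @ (G @ x) - x).mean())

x = rng.normal(size=(4, 8))  # a batch of painted-domain samples
print(cycle_loss(x, G, F))   # ~0 for exact inverses
```

During training this loss is minimized jointly with the adversarial losses, which is what allows the translators to learn from unpaired painted and transparent images.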