2015 IEEE/RSJ International Conference on Intelligent Robots and Systems

7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles

Full Day Workshop, Room F

Registration, Program, Proceedings

September 28th, 2015, Hamburg, Germany

Contact : Professor Philippe Martinet
IRCCyN-CNRS Laboratory, Ecole Centrale de Nantes,
1 rue de la Noë
44321 Nantes Cedex 03, France
Phone: +33 237406975, Fax: +33 237406934,
Email: Philippe.Martinet@irccyn.ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet


Organizers

Professor Philippe Martinet, IRCCyN-CNRS Laboratory, Ecole Centrale de Nantes, 1 rue de la Noë, 44321 Nantes Cedex 03, France, Phone: +33 237406975, Fax: +33 237406930, Email: Philippe.Martinet@irccyn.ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet

Research Director Christian Laugier, INRIA, Emotion project, INRIA Rhône-Alpes, 655 Avenue de l'Europe, 38334 Saint Ismier Cedex, France, Phone: +33 4 7661 5222, Fax : +33 4 7661 5477, Email: Christian.Laugier@inrialpes.fr,
Home page: http://emotion.inrialpes.fr/laugier

Professor Urbano Nunes, Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of University of Coimbra, 3030-290 Coimbra, Portugal, GABINETE 3A.10, Phone: +351 239 796 287, Fax: +351 239 406 672, Email: urbano@deec.uc.pt,
Home page: http://www.isr.uc.pt/~urbano

Professor Christoph Stiller, Institut für Mess- und Regelungstechnik, Karlsruher Institut für Technologie (KIT), Engler-Bunte-Ring 21, Gebäude: 40.32, 76131 Karlsruhe, Germany, Phone: +49 721 608-42325, Fax: +49 721 661874, Email: stiller@kit.edu,
Home page: http://www.mrt.kit.edu/mitarbeiter_stiller.php

General Scope

The purpose of this workshop is to discuss topics related to the challenging problems of autonomous navigation and driving assistance in open and dynamic environments. Technologies related to application fields such as unmanned outdoor vehicles and intelligent road vehicles will be considered from both theoretical and technological points of view. Several research questions at the cutting edge of the state of the art will be addressed. Among the many application areas of robotics, the transportation of people and goods is a domain that will benefit dramatically from intelligent automation. Fully automatic driving is emerging as the approach that can dramatically improve efficiency while moving toward the goal of zero fatalities. This workshop will address the robotics technologies at the very core of this major shift in the automobile paradigm. Achievements, challenges and open questions in this area, such as autonomous outdoor vehicles, will be presented.

Main Topics

  • Road scene understanding
  • Lane detection and lane keeping
  • Pedestrian and vehicle detection
  • Detection, tracking and classification
  • Feature extraction and feature selection
  • Cooperative techniques
  • Collision prediction and avoidance
  • Advanced driver assistance systems
  • Environment perception, vehicle localization and autonomous navigation
  • Real-time perception and sensor fusion
  • SLAM in dynamic environments
  • Mapping and maps for navigation
  • Real-time motion planning in dynamic environments
  • Human-Robot Interaction
  • Behavior modeling and learning
  • Robust sensor-based 3D reconstruction
  • Modeling and Control of mobile robots

International Program Committee

  • Alberto Broggi (VisLab, Parma University, Italy)
  • Philippe Bonnifait (Heudiasyc, UTC, France)
  • Salvador Dominguez Quijada (IRCCyN, Ecole Centrale de Nantes, France)
  • Zhencheng Hu, (Kumamoto University, Japan)
  • Christian Laugier (Emotion, INRIA, France)
  • Philippe Martinet (IRCCyN, Ecole Centrale de Nantes, France)
  • Urbano Nunes (Coimbra University, Portugal)
  • Cedric Pradalier (GeorgiaTech Lorraine, France)
  • Cyril Stachniss (AIS, University of Freiburg, Germany)
  • Christoph Stiller (Karlsruhe Institute of Technology, Germany)
  • Benoit Thuilot (Blaise Pascal University, France)
  • Rafael Toledo Moreo (Universidad Politécnica de Cartagena, Spain)
  • Sebastian Thrun (Stanford University, USA)
  • Ming Yang (SJTU Shanghai, China)
  • Dizan Vasquez (INRIA, France)
  • Young-Woo SEO (CMU, USA)
  • Wang Han (NTU, Singapore)

Final program

    Introduction to the workshop 8:20

    Session I: Localization & mapping

    • Title: Autonomous Integrity Monitoring of Navigation Maps on board Vehicles 8:30
      Keynote speaker: Philippe Bonnifait (Heudiasyc, France) 35min + 5min questions

      Presentation, Paper

      Abstract: This talk addresses the problem of monitoring navigation systems on board passenger vehicles by using a Fault Detection, Isolation, and Adaptation (FDIA) paradigm. The aim is to prevent malfunctions in systems such as advanced driving assistance systems and autonomous driving functions that use data provided by the navigation system. The integrity of the estimation of the vehicle position provided by the navigation system is continuously monitored and assessed. The proposed approach uses an additional estimate of vehicle position that is independent of the navigation system and based on data from standard vehicle sensors. First, fault detection consists in comparing the two estimates using a sequential statistical test to detect discrepancies despite the presence of noise. Second, fault isolation and adaptation is introduced to identify faulty estimates and to provide a correction where necessary. The FDIA framework presented here utilizes repeated trips along the same roads as a source of redundancy. Relevant properties of this formalism are given and verified experimentally using an equipped vehicle in rural and urban conditions and with various map faults. Real results show that sequential FDIA performed well, even in difficult GNSS conditions.
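
The sequential statistical test at the heart of the FDIA scheme can be illustrated with a minimal sketch (not the authors' exact formulation): a CUSUM-style detector that accumulates the discrepancy between the map-based and sensor-only position estimates and flags a fault once the accumulated evidence exceeds a threshold. The `drift` and `threshold` values are illustrative assumptions.

```python
def cusum_discrepancy_test(residuals, drift=0.5, threshold=4.0):
    """CUSUM-style sequential test over position discrepancies (meters)
    between the map-based and sensor-only estimates.

    Returns the index at which a fault is declared, or None."""
    g = 0.0
    for k, r in enumerate(residuals):
        # Accumulate evidence only beyond the allowed drift, so that
        # isolated noise spikes are tolerated.
        g = max(0.0, g + abs(r) - drift)
        if g > threshold:
            return k
    return None
```

Noise-level residuals never trigger the test, while a persistent offset (e.g. a map fault) is flagged after a few samples.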

    • Title: Collaborative Visual SLAM Framework for a Multi-Robot System 9:10
      Authors: Nived Chebrolu, David Marquez-Gamez and Philippe Martinet 17min + 3min questions

      Presentation, Paper

      Abstract: This paper presents a framework for collaborative visual SLAM using monocular cameras for a team of mobile robots. The robots perform SLAM individually using their on-board processors thereby estimating the seven degrees of freedom (including scale) for the motion of the camera and creating a map of the environment as a pose-graph of keyframes. Each robot communicates to a central server by sending local keyframe information. The central server merges them when a visual overlap is detected in the scene and creates a global map. In the background, the global map is continuously optimized using bundle adjustment techniques and the updated pose information is communicated back as feedback to the individual robots. We present some preliminary experimental results towards testing the framework with two mobile robots in an indoor environment.

    • Title: Improving Vision-based Topological Localization by Combining Local and Global Image Features 9:30
      Authors: Shuai Yang and Han Wang 17min + 3min questions

      Presentation, Paper, video

      Abstract: Vision-based mobile robot localization methods have become popular in recent years. Most current approaches use either global or local image features for appearance representation. However, purely global image feature based methods suffer from low invariance to occlusions and viewpoint changes. Meanwhile, purely local image feature based methods ignore the global context of the image, which is very important in place representation. In this paper, we present a vision-based robot localization system that uses both local and global image features to represent locations; the estimated location of the robot is determined by a Bayesian tracking filter which uses the proposed hybrid image representation. Experiments were conducted on a 1200 m campus route and the results show that the proposed method works reliably even with large appearance changes.
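
The Bayesian tracking filter over discrete places can be sketched as follows (an illustrative discrete Bayes filter, not the paper's exact implementation). The per-place likelihoods would come from the hybrid local/global image similarity:

```python
def bayes_place_update(prior, likelihoods, transition=0.2):
    """One step of a discrete Bayes filter over N places along a route.
    prior: list of P(place); likelihoods: per-place observation scores
    (e.g. a blend of global and local image similarities).
    transition spreads probability to neighbouring places (motion model)."""
    n = len(prior)
    # Motion update: a fraction of each place's mass leaks to its neighbours.
    predicted = []
    for i in range(n):
        p = prior[i] * (1 - transition)
        if i > 0:
            p += prior[i - 1] * transition / 2
        if i < n - 1:
            p += prior[i + 1] * transition / 2
        predicted.append(p)
    # Measurement update followed by normalisation.
    posterior = [p * l for p, l in zip(predicted, likelihoods)]
    s = sum(posterior)
    return [p / s for p in posterior]
```

Starting from a uniform prior, a strong likelihood at one place concentrates the posterior there within a single update.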

    • Title: PML-SLAM: a solution for localization in large-scale urban environments 9:50
      Authors: Z. Alsayed, G. Bresson, F. Nashashibi, A. Verroust Blondet 17min + 3min questions

      Presentation, Paper

      Abstract: Localization is considered a key factor for autonomous cars. In this paper, we present a complete Simultaneous Localization And Mapping (SLAM) solution. The algorithm is based on a probabilistic maximum likelihood framework using grid maps (the map is simply represented as a grid of occupancy probabilities). The solution addresses three well-known localization problems: 1. localization in an unknown environment, 2. localization in a pre-mapped environment, and 3. recovering the localization of the vehicle. Memory issues caused by the unbounded size of outdoor environments are solved using an optimized management strategy that we propose. This strategy allows us to navigate smoothly while saving and loading probability-grid submaps to/from a hard disk in a transparent way. We present the results of our solution on our own experimental dataset as well as on the KITTI dataset.
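
The submap management idea can be illustrated with a small sketch (names and parameters are illustrative, not the authors' code): an occupancy grid split into fixed-size blocks, where only blocks that are actually touched exist in memory, and evicted blocks would be serialised to disk in a full implementation.

```python
import math

class SubmapGrid:
    """Sparse log-odds occupancy grid split into fixed-size submaps."""

    def __init__(self, submap_size=128, cell=0.1):
        self.submap_size = submap_size
        self.cell = cell                 # cell edge length in meters
        self.submaps = {}                # (bx, by) -> dict of local cells

    def _locate(self, x, y):
        ix, iy = int(x / self.cell), int(y / self.cell)
        key = (ix // self.submap_size, iy // self.submap_size)
        return key, (ix % self.submap_size, iy % self.submap_size)

    def update(self, x, y, occupied, step=0.4):
        key, local = self._locate(x, y)
        grid = self.submaps.setdefault(key, {})
        # Log-odds update: positive evidence for hits, negative for misses.
        grid[local] = grid.get(local, 0.0) + (step if occupied else -step)

    def occupancy(self, x, y):
        key, local = self._locate(x, y)
        lo = self.submaps.get(key, {}).get(local, 0.0)
        return 1.0 / (1.0 + math.exp(-lo))   # log-odds -> probability
```

Querying an untouched region costs nothing and returns the 0.5 "unknown" prior; only visited blocks consume memory.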

    Coffee Break 10:10

    Session II: Perception and Situation Awareness 10:30
    • Title: Perception for automated and assisted driving 10:30
      Keynote speaker: Dr.-Ing. Michael Darms (Volkswagen Aktiengesellschaft, Group Research, Germany) 35min + 5min questions


      Abstract: The talk gives an overview of the design of perception systems for automated and assisted driving. It compares the requirements of the two different domains and discusses the challenge of having an application-dependent situation awareness layer with an application-independent perception layer. One focus of the talk is on the task of deriving information about the location of the road and lanes from sensor data, which is still a key challenge for automated and assisted driving. It is discussed how methods stemming from the field of neural networks can be applied and how a priori information stemming from maps can be used in the data fusion process. Finally, the talk will give an overview of the perception system implemented in Jack, an automated vehicle with which Audi and VW Group Research demonstrated the maturity of the current stage of development of automated driving. The vehicle completed a piloted drive over two days and 550 miles under real conditions on a highway from the San Francisco Bay Area to Las Vegas. An outlook is given on how such a perception system can be integrated into a modular and scalable architecture and which approaches are conceivable for testing such a system.

    • Title: Free-space Detection using Online Disparity-supervised Color Modeling 11:10
      Authors: Willem P. Sanberg, Gijs Dubbelman and Peter H.N. de With 17min + 3min questions

      Presentation, Paper

      Abstract: This work contributes to vision processing for intelligent vehicle applications with an emphasis on Advanced Driver Assistance Systems (ADAS). A key issue for ADAS is the robust and efficient detection of free drivable space in front of the vehicle. To this end, we propose a stixel-based probabilistic color-segmentation algorithm to distinguish the ground surface from obstacles in traffic scenes. Our system learns color appearance models for free-space and obstacle classes in an online and self-supervised fashion. To this end, it applies a disparity-based segmentation, which can run in the background of the critical system path and at a lower frame rate than the color-based algorithm. This strategy enables an algorithm without a real-time disparity estimate. As a consequence, the current road scene can be analyzed without the extra latency of disparity estimation. This feature translates into a reduced response time from data acquisition to data analysis, which is a critical property for high-speed ADAS. Our evaluation over different color modeling strategies on publicly available data shows that the color-based analysis can achieve similar (77.6% vs. 77.3% correct) or even better results (4.3% less missed obstacle-area) in difficult imaging conditions, compared to a state-of-the-art disparity-only method.
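
The online, self-supervised color modeling can be sketched as follows (a deliberately simplified hue-histogram version; the actual system uses richer color models and stixels). The disparity-based segmentation running in the background supplies the labelled samples:

```python
def build_color_models(samples, bins=8):
    """samples: list of (hue_bin, is_free) pairs produced by a
    disparity-based segmentation running in the background.
    Returns per-class hue histograms with Laplace smoothing."""
    free = [1.0] * bins    # Laplace smoothing avoids zero probabilities
    obst = [1.0] * bins
    for h, is_free in samples:
        (free if is_free else obst)[h] += 1.0
    fs, os_ = sum(free), sum(obst)
    return [f / fs for f in free], [o / os_ for o in obst]

def classify_pixel(hue_bin, free_model, obst_model):
    """Label a pixel as free space when its colour is more likely under
    the free-space model (equal class priors assumed)."""
    return free_model[hue_bin] >= obst_model[hue_bin]
```

Because the models are rebuilt continuously from fresh samples, they adapt to changing road appearance without any offline training.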

    • Title: Vision-Based Road Detection using Contextual Blocks 11:30
      Authors: Caio Cesar Teodoro Mendes, Vincent Frémont and Denis Fernando Wolf 17min + 3min questions

      Presentation, Paper, video

      Abstract: Road detection is a fundamental task in autonomous navigation systems. In this paper, we consider the case of monocular road detection, where images are segmented into road and non-road regions. Our starting point is the well-known machine learning approach, in which a classifier is trained to distinguish road and non-road regions based on hand-labeled images. We proceed by introducing the use of “contextual blocks” as an efficient way of providing contextual information to the classifier. Overall, the proposed methodology, including its image feature selection and classifier, was conceived with computational cost in mind, leaving room for optimized implementations. Regarding experiments, we perform a sensible evaluation of each phase and feature subset that composes our system. The results show a great benefit from using contextual blocks and demonstrate their computational efficiency. Finally, we submit our results to the KITTI road detection benchmark achieving scores comparable with state of the art methods.

    • Title: Following Dirt Roads at Night-Time: Sensors and Features for Lane Recognition and Tracking 11:50
      Authors: Sebastian F. X. Bayerl, Thorsten Luettel and Hans-Joachim Wuensche 17min + 3min questions

      Presentation, Paper, video

      Abstract: The robust perception of roads is a major prerequisite in many Advanced Driver Assistant Systems such as Lane Departure Warning and Lane Keeping Assistant Systems. While road detection at day-time is a well-known topic in literature, few publications provide a detailed description about handling the lack of day-light. In this paper we present multiple sensors and features for perceiving roads at day and night. The presented features are evaluated according to their quality for road detection. We generated a large number of labeled sample data and extracted the quality of the features from their probability distributions. The challenge of tracking an unmarked road under bad lighting conditions is demonstrated by comparing receiver operating characteristics (ROC) of the features at day and night-time. Based on these results we present a road tracking system capable of tracking unmarked roads of lower order regardless of illumination conditions. Practical tests prove the robustness up to unmarked dirt roads under different weather conditions.

    Lunch break 12:10

    Session III: Interactive session 13:30
    • Title: Generating Compact Models for Traffic Scenarios to Estimate Driver Behavior Using Semantic Reasoning
      Authors: Ilya Dianov, Karinne Ramirez-Amaro and Gordon Cheng


      Abstract: Driving through a constantly changing environment is one of the main challenges of autonomous driving. To navigate successfully, the vehicle should be able to handle any potential situation on the road by constantly analyzing the traffic environment and determining what objects might influence its current behavior. This paper presents an artificial intelligence method to improve the perception and situation awareness of autonomous vehicles by detecting and extracting meaningful information about different traffic scenarios and inferring correct driving behavior for each of these scenarios. Our method uses a state-of-the-art technique based on semantic reasoning previously used for recognizing human activities in cooking scenarios. This algorithm has been adapted and extended to the automotive domain by introducing common object properties such as ObjectInFront, ObjectActedOn, MoveForward, Turn, etc. The main advantage of our proposed methodology is the fact that it is applicable to different mobile domains without any additional training. In other words, our system is first trained on traffic situations and the obtained semantic models are later used to successfully allow a mobile robot to navigate autonomously in an indoor environment by utilizing knowledge and inference. The results show an overall classification rate of 90.14% for traffic scenario recognition. Additionally, the average processing and behavior generation time of the implemented system is 0.177 seconds, which allows the mobile robot to react relatively quickly to newly encountered situations.

    • Title: 16 channels Velodyne versus planar LiDARs based perception system for Large Scale 2D-SLAM
      Authors: S. Nobili, S. Dominguez, G. Garcia and P. Martinet


      Abstract: The ability to self-localize is a basic requirement for an autonomous vehicle, and a prior reconstruction of the environment is usually needed. This paper analyses the performance of two different hardware architectures for the problem of 2D Simultaneous Localization and Mapping (2D-SLAM) in large-scale scenarios. The choice of the perception system plays a vital role in building a reliable and simple architecture for SLAM. We therefore analyse two common configurations: one based on three planar Sick LMS151 LiDARs and the other based on a Velodyne VLP-16 3D LiDAR. For each architecture we identify advantages and drawbacks related to system installation, calibration complexity, reliability and effectiveness of the data for localization purposes. The conclusions tip the balance toward using a Velodyne-like sensor, which facilitates the hardware implementation and keeps a lower cost without compromising localization accuracy. From the point of view of perception, additional advantages arise from having 3D information available on the system for other purposes.

    • Title: Discriminative Map Matching Using View Dependent Map Descriptor
      Authors: Enfu Liu and Kanji Tanaka


      Abstract: The problem of matching a local occupancy grid map built by a mobile robot to previously built maps is crucial for autonomous navigation in both indoor and outdoor environments. In this paper, the map matching problem is addressed from a novel perspective, which is different from the classic bag-of-words (BoW) paradigm. Unlike previous BoW approaches that trade discriminativity for viewpoint invariance, we develop a local map descriptor that is view-dependent and highly discriminative. Our method consists of three distinct steps: (1) First, an informative local map of the robot's local surroundings is built. (2) Next, a unique viewpoint is planned in accordance with the given local map. (3) Finally, a synthetic view is described at the designated viewpoint. Because the success of our local map descriptor (LMD) depends on the assumption that the viewpoint is unique given a local map, we also address the issue of viewpoint planning and present a solution that provides similar views for similar local maps. Consequently, we also propose a practical map-matching framework that combines the advantages of the fast succinct bag-of-words technique and the highly discriminative LMD descriptor. The results of the experiments conducted verify the efficacy of our proposed approach.

    • Title: Path Planning and Steering Control for an Automatic Perpendicular Parking Assist System
      Authors: Plamen Petrov, Fawzi Nashashibi and Mohamed Marouf


      Abstract: This paper considers the perpendicular reverse parking problem for front-wheel-steering vehicles. Relationships between the widths of the parking aisle and the parking place, as well as the parameters and initial position of the vehicle, for planning a collision-free reverse perpendicular parking in one maneuver are first presented. Two types of steering controllers (bang-bang and saturated tanh-type controllers) for straight-line tracking are proposed and evaluated. We demonstrate that the saturated controller, which is continuous, also achieves quick steering while avoiding chattering and can be successfully used in solving parking problems. Simulation results which confirm the effectiveness of the proposed control scheme are presented.

    • Title: Obstacle segmentation with low-density disparity maps
      Authors: Daniela A. Ridel, Patrick Y. Shinzato and Denis F. Wolf


      Abstract: The detection of obstacles is a fundamental issue in autonomous navigation, as it is the main key to collision prevention. This paper presents a method for the segmentation of general obstacles by stereo vision with no need for dense disparity maps or assumptions about the scenario. A sparse set of points is selected according to a local spatial condition and then clustered as a function of its neighborhood, disparity values and a cost associated with the possibility of each point being part of an obstacle. The method was evaluated on hand-labeled images from the KITTI object detection benchmark, and precision and recall metrics were calculated. The quantitative and qualitative results were satisfactory in scenarios with different types of objects.

    Session IV: Planning & Navigation 14:30
    • Title: Determining the Nonexistence of Evasive Trajectories for Collision Avoidance Systems 14:30
      Keynote speaker: Pr Matthias Althoff (Technische Universität München, Germany) 35min + 5min questions


      Abstract: It is of utmost importance for automatic collision avoidance systems to correctly evaluate the risk of the current situation and constantly decide if and what kind of evasive maneuver must be initiated. Most motion planning algorithms find such a maneuver by searching a deterministic or random subset of the state space or input space. These approaches can be designed to be complete in the sense that they converge to a feasible solution as sampling is made denser. However, they are not suitable for determining whether a solution exists. In this paper, we present an approach which overapproximates the reachable set of the host vehicle considering workspace obstacles. It thus provides an upper bound on the solution set and can report if no solution exists. Furthermore, the calculated set can be used to guide the search of an underlying planning algorithm, since each trajectory of the host vehicle is ensured to be contained within this set.
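
The core idea, bounding the set of all reachable states and declaring infeasibility when obstacles cover it, can be sketched in one dimension with interval arithmetic (a toy illustration, not the paper's reachability analysis):

```python
def reachable_interval(x0, v_min, v_max, t):
    """Overapproximate the host vehicle's reachable longitudinal
    positions at time t given bounded velocity (interval arithmetic)."""
    return (x0 + v_min * t, x0 + v_max * t)

def evasion_possible(x0, v_min, v_max, t, obstacle):
    """If the obstacle interval covers the whole overapproximated
    reachable set, no evasive trajectory can exist. Because the set is
    an over-approximation, the converse is not guaranteed: a True
    result only says a solution has not been ruled out."""
    lo, hi = reachable_interval(x0, v_min, v_max, t)
    ob_lo, ob_hi = obstacle
    return not (ob_lo <= lo and hi <= ob_hi)
```

This asymmetry is exactly what makes over-approximative reachability useful for proving the nonexistence of evasive trajectories.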

    • Title: Safe prediction-based local path planning using obstacle probability sections 15:10
      Authors: Tanja Hebecker and Frank Ortmeier 17min + 3min questions

      Presentation, Paper

      Abstract: Autonomous mobile robots are gaining more and more importance. In the near future they will be a part of everyday life. It is therefore critical to make them as reliable and safe as possible. We present a local path planner that shall ensure safety in an environment cluttered with unexpectedly moving obstacles. In this paper, the motion of obstacles is predicted by generating probability sections, and the collision risks of path configurations are checked by determining whether these configurations lead inevitably to a collision or not. The presented approach worked efficiently in scenarios with static and dynamic obstacles.

    • Title: Improving Monte Carlo Localization using Reflective Markers: An Experimental Analysis 15:30
      Authors: Francesco Leofante, Gwénolé Le Moal, Gaëtan Garcia and Patrice Rabaté 17min + 3min questions

      Presentation, Paper

      Abstract: Robust localization is a basic requirement for many applications in mobile robotics. Although many techniques have been devised to solve the localization problem, symmetrical or featureless environments represent a great challenge for most commonly used approaches. In this paper, we investigate how artificial landmarks can be used to reduce the chance of failure of the localization process. More specifically, an experimental analysis of a probabilistic sensor model designed to employ reflective markers is presented. The analysis focuses on two central parameters of the model; several experiments are carried out to evaluate their effect on the overall localization process.
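
A minimal sketch of how such a marker-based sensor model plugs into Monte Carlo localization (the mixture weights and noise values below are illustrative assumptions, not the paper's calibrated parameters):

```python
import math

def marker_likelihood(measured_range, expected_range, sigma=0.1, p_false=0.05):
    """Probabilistic sensor model for a reflective marker detection:
    a Gaussian around the expected range mixed with a small uniform
    false-detection component (two parameters of the kind studied here)."""
    gauss = math.exp(-0.5 * ((measured_range - expected_range) / sigma) ** 2) \
            / (sigma * math.sqrt(2 * math.pi))
    return (1 - p_false) * gauss + p_false * 0.01  # uniform over ~100 m

def reweight_particles(weights, measured, expected_per_particle, **kw):
    """Multiply each particle's weight by the marker likelihood and
    renormalise (one MCL measurement update)."""
    new = [w * marker_likelihood(measured, e, **kw)
           for w, e in zip(weights, expected_per_particle)]
    s = sum(new)
    return [w / s for w in new]
```

The uniform component keeps particles alive after an occasional spurious detection, which is what makes the filter robust in practice.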

    Coffee break 15:50

    Session V: Sensing 16:20
    • Title: Embedded Bayesian Perception & Risk Assessment for ADAS & Autonomous Cars 16:20
      Keynote speaker: Christian Laugier (INRIA, France) 35min + 5min questions

      Presentation, video

      Abstract: This talk addresses both the socio-economic and technical issues which are behind the development of the next generation of cars. These future cars will include both enhanced Advanced Driving Assistance Systems and Driverless Car functionalities. In the talk, new Bayesian approaches for Autonomous Vehicles will be presented, with an emphasis on Situation Awareness, Collision Risk Assessment, and Decision-making for safe navigation and maneuvering. It will be shown that Bayesian approaches are mandatory for developing such technologies and for obtaining the required robustness in the presence of uncertainty and complex traffic situations. Results obtained in cooperation with Toyota and with Renault will also be presented.

    • Title: 360 degree 3D ground surface reconstruction using a single rotating camera 17:00
      Authors: Motooka, Sugimoto, Okutomi and Shima 17min + 3min questions

      Presentation, Paper

      Abstract: We propose a method for reconstructing 360 degree 3D ground surfaces with high precision from images captured by a single rotating camera, assuming that the camera is mounted at an off-centered position on a construction machine whose upper body is rotatable around a single axis (e.g. a power shovel). We estimate a regular-grid ground surface whose coordinate system is determined from the camera positions estimated by a standard structure-from-motion technique. To produce high-quality ground surfaces, we first initialize the ground surface by fitting it to the 3D points from SfM; we then minimize the variance of pixel values over the whole ground surface, where all contributing pixels are treated equally, apart from outliers that appear due to self-shadows and lens flares under sunny weather conditions. The validity of the proposed method is demonstrated through experiments using synthetic and real images.

    • Title: Towards Characterizing the Behavior of LiDARs in Snowy Conditions 17:20
      Authors: Sebastien Michaud Jean-Francois Lalonde, and Philippe Giguere 17min + 3min questions

      Presentation, Paper

      Abstract: Autonomous driving vehicles must be able to handle difficult weather conditions in order to gain acceptance. For example, challenging situations such as falling snow could significantly affect the performance of vision- or LiDAR-based perception systems. In this paper, we are interested in characterizing the behavior of LiDARs in snowy conditions, as there seems to be little information publicly available. In particular, we present a characterization of the behavior of 4 commonly used LiDARs (Velodyne HDL-32E, SICK LMS151, SICK LMS200 and Hokuyo UTM-30LX-EW) in falling snow. Data was collected from the 4 sensors simultaneously during 6 snowfalls. Statistical analysis of these datasets indicates that these sensors can be modeled in a probabilistic manner, allowing the use of a Bayesian framework to improve robustness. Moreover, we were able to observe the temporal evolution of the impact of the falling snow during these snowstorms, and to characterize the sensitivity of each device. Finally, we conclude that the falling snow had little impact beyond a range of 10 m.

    Closing 17:40
    Author Information

      Format of the paper: Papers should be prepared according to the IROS15 final camera-ready format and should be 4 to 6 pages long. The detailed information on the paper format is available from the IROS15 page: http://www.iros2015.org/index.php/contributing/paper-instructions. Papers must be sent to Philippe Martinet by email at Philippe.Martinet@irccyn.ec-nantes.fr

      Important dates (preliminary)

      • Deadline for paper submission: July 1st, 2015
      • Acceptance with review comments: July 21st, 2015
      • Deadline for final paper submission: August 20th, 2015 (12am at the latest)

      Talk information

      • Invited talk: 40 min (35 min talk, 5 min question)
      • Other talk: 20 min (17 min talk, 3 min question)

      Interactive session

      • Interactive and open session: 1h00

    Previous workshops

      Previously, several workshops were organized in nearly the same field. The 1st edition, PPNIV'07, was held in Rome during ICRA'07 (around 60 attendees); the second, PPNIV'08, in Nice during IROS'08 (more than 90 registered attendees); the third, PPNIV'09, in Saint-Louis (around 70 attendees) during IROS'09; the fourth, PPNIV'12, in Vilamoura (over 95 attendees) during IROS'12; the fifth, PPNIV'13, in Tokyo (over 135 attendees) during IROS'13; and the sixth, PPNIV'14, in Chicago (over 100 attendees) during IROS'14.
      In parallel, we have also organized SNODE'07 in San Diego during IROS'07 (around 80 attendees), SNODE'09 in Kobe during ICRA'09 (around 70 attendees), RITS'10 in Anchorage during ICRA'10 (around 35 attendees), PNAVHE'11 in San Francisco during IROS'11 (around 50 attendees), and MEPC'14 in Hong Kong during ICRA'14 (over 60 attendees).

      Special issues have been published in IEEE Transactions on ITS (Car and ITS applications, September 2009), in IEEE-RAS Magazine (Perception and Navigation for Autonomous Vehicles, March 2014) and in ITS Magazine (Perception and Navigation for Autonomous Vehicles, March 2015). We also plan to prepare a special issue of the International Journal of Robotics Research (IJRR).


      Proceedings: The workshop proceedings will be published within the IROS Workshop/Tutorial CDROM and electronically as a pdf file.

      Special issue: Selected papers will be considered for a special issue of the International Journal of Robotics Research (IJRR). We will issue an open call; submissions will go through a separate peer-review process.