Virtual Reality for Everyone!
Using the latest technology, we can safely hijack your most trusted senses, thereby fooling your brain into believing you are in another world. The virtual world could be a magical, graphical place or a live representation of a distant, physical space. What would you like to do there? Perhaps socialize with friends, relax alone, play games, educate yourself, improve your cognitive skills, write software, travel inside of your own body, and on and on. Virtual reality (VR) has been around for a long time, but due to the recent convergence of sensing, display, and computation technologies, there is an unprecedented opportunity to explore this form of human augmentation with lightweight, low-cost materials and simple software platforms. This is an intense form of human-computer interaction (HCI) that requires re-examining core engineering principles with a direct infusion of perceptual psychology research. Developing systems that optimize classical criteria might lead to overcomplicated solutions that are too slow or costly in practice, yet make no perceptible difference to users. Simple adaptation of techniques that were developed for on-screen viewing, such as cinematography and first-person shooter gameplay, often leads to unpleasant VR experiences due to the presentation of unusual stimuli or to mismatches between the human vestibular system and the other senses. With the rapid rise in consumer VR, fundamental research questions are popping up everywhere, slicing across numerous disciplines from engineering to sociology to film to medicine. Come join in the fun; there is much work to do! For our paper on the original head tracking for the Oculus Rift, see this.
Wild Bodies
Most people keep their robots under tight control, tracking their precise positions and making sure that they don't collide with anything. Not us! We let our robots run wild. For starters, take a weaselball (valued at $4 US) and remove the weasel. See how well it explores. Nice behavior, but we then want to coerce these bodies into doing something useful; therefore, we divide their environment into a discrete set of regions and design gateways that act as turnstiles to gently guide their flow. It is sort of like a robotic version of Maxwell's demon. With a few pieces of paper and some bricks, we obtained one of the cheapest multi-robot coordination demos ever. We are designing various kinds of gates, including pliant, actuated, and even virtual gates that are made from simple detection beams. We also consider various robot platforms, including simple differential drive robots, Hexbug Nanos, and swimming weaselballs. We believe these systems can effectively solve problems such as exploration, coverage, searching, and patrolling, even though they use very little sensing and no precise mechanical or environment models. In addition to building systems, we study deep questions closely related to dynamical billiards. The key property we want is wildness, which means that a body moves along a trajectory that nontangentially strikes every open set along the boundary of whatever region it is placed in. The particular system model is not important. See our paper that describes the methods involving physical gates. For virtual gates, see this paper. For more videos, go here and here. All of our wild bodies papers are here. Involved students: Leonardo Bobadilla, Oscar Sanchez, Katrina Gossman, Eric Gobst, Randal Nephew, Fredy Martinez.
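As a minimal sketch of how one-way gates coerce otherwise uncontrolled bodies, the following Python fragment treats regions as graph nodes and gates as directed exits. The region names, gate layout, and uniformly random motion model are illustrative assumptions, not our experimental setup.

```python
import random

def simulate(gates, bodies, steps, seed=0):
    """Discrete sketch of wild bodies guided by one-way gates.

    `gates` maps each region to the list of regions a body may exit
    to (a gate lets bodies out one way but never back).  Wildness --
    a body eventually trying every gate on its boundary -- is modeled
    crudely here by choosing an available gate uniformly at random.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        # Each body independently passes through some available gate,
        # or stays put if its region has no exit.
        bodies = [rng.choice(gates[r]) if gates[r] else r for r in bodies]
    return bodies
```

With gates arranged as A -> B -> C and no exit from C, every body is eventually funneled into C regardless of where it starts, with no sensing of the bodies at all.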
Combinatorial Filters
Filtering, or sensor fusion, is the process of combining information from multiple sensor observations to obtain a coherent estimate. Filters are used for robot localization, map building, SLAM, target tracking, counting people, GPS, and so on. The vast majority of approaches focus on measuring as much information as possible to obtain complete, reliable estimates of the entire system state. Bayesian filtering is by far the most common family of filters, using probabilistic models to overcome process noise and disturbances. This demands powerful sensors and imposes difficult modeling burdens to probabilistically account for uncertainties. Our approach to filtering is a radical departure from this trend. We instead "handle" uncertainty by developing simple filters that determine just enough information to solve the task at hand, thereby eliminating unnecessary sensing and modeling burdens. We consequently introduce a new filtering family, called combinatorial filters, in which sensors measure discrete, critical pieces of information that are combined by a simple filtering algorithm, which in many cases looks like a small automaton. An overview of combinatorial filters appears in my IROS 2009 tutorial. Some representative works are shadow information spaces and sensor beams and obstacles. All of our combinatorial filtering papers are here. Involved students: Jingjin Yu, Leonardo Bobadilla, Oscar Sanchez, Yaonan Huang.
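The small-automaton flavor of such filters can be sketched in a few lines of Python: a body moves among regions separated by detection beams, and the filter tracks only the set of regions the body could occupy, with no coordinates and no probabilities. The region names and beam labels below are illustrative assumptions.

```python
def beam_filter(adjacency, init_states, observations):
    """Nondeterministic combinatorial filter driven by beam crossings.

    `adjacency` maps a beam label to the set of (region, region) pairs
    that the beam separates.  A crossing of beam b moves the body from
    one side of b to the other, but the beam does not report direction,
    so the filter updates a *set* of possible regions.
    """
    states = set(init_states)
    for beam in observations:
        nxt = set()
        for (a, b) in adjacency[beam]:
            # The body could have crossed in either direction.
            if a in states:
                nxt.add(b)
            if b in states:
                nxt.add(a)
        states = nxt
    return states
```

For four rooms in a row with beams b1, b2, b3 between neighbors, an initially unknown body that triggers b1 and then b2 is pinned down to the third room, even though each individual observation is a single bit.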
Feedback Motion Planning
Motion planning algorithms ordinarily produce a collision-free path, which is later tracked by some controller that uses feedback. It makes more sense to incorporate the feedback directly into the plan. This leads to feedback motion planning, in which the computed plan indicates which velocity should be achieved at every point in the collision-free portion of the configuration space (or phase space). Alternatively, a feedback plan may be encoded as a navigation function (Rimon, Koditschek, 1990; imagine Lyapunov functions or cost-to-go functions that account for obstacles). It is widely known in control theory that feedback is crucial for robustness; however, this idea has been slow to trickle into motion planning research. Whereas ordinary planning hands the path off to another module, a feedback plan is used directly during execution with sensor feedback. The plan could be the controller itself, or one could design a control law that tracks the velocity field rather than a fixed path. Just as in ordinary path planning, there are feasible and optimal versions of the feedback problem. For the feasible case, we need only guarantee that the goal region is reached in finite time. For the optimal case, execution must be optimal with respect to time, distance, energy, or some other criterion. Also as in ordinary path planning, there are both combinatorial (or exact) and sampling-based algorithms that generate plans. As an example of combinatorial feedback planning, see our recent paper, which shows how to simply compute smooth vector fields over cell decompositions. Most useful approaches are sampling-based. For the feasible case, see our paper on sampling-based neighborhood graphs or Russ Tedrake's paper from MIT on LQR trees. For the optimal case, we developed a general-purpose Dijkstra-like algorithm in this paper.
More recently, we presented a paper with Dijkstra and A* algorithms for Euclidean shortest paths that work directly over a simplicial complex embedded in the free space. All of our feedback planning papers are here. Involved students: Dmitry Yershov, Max Katsev.
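As a rough discrete illustration of the feasible case (not our simplicial-complex method), one can compute a cost-to-go function over a 4-connected grid with a Dijkstra wavefront from the goal, and then execute from any cell by greedily descending it. The grid abstraction and cell coordinates are simplifying assumptions of this sketch.

```python
import heapq

def cost_to_go(free, goal):
    """Dijkstra wavefront from the goal over a 4-connected grid.

    `free` is a set of collision-free (row, col) cells.  The returned
    dict acts as a discrete navigation function: from any reachable
    cell, stepping to the neighbor with the lowest value leads to the
    goal in finite time.
    """
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float('inf')):
            continue  # stale queue entry
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in free and d + 1 < dist.get(nb, float('inf')):
                dist[nb] = d + 1
                heapq.heappush(pq, (d + 1, nb))
    return dist

def feedback_step(dist, cell):
    """The feedback plan itself: greedily descend the cost-to-go."""
    r, c = cell
    candidates = ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1), (r, c))
    return min(candidates, key=lambda nb: dist.get(nb, float('inf')))
```

Note that no path is ever stored; the plan assigns an action to every cell, so a disturbance that knocks the robot off course requires no replanning.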
Landmark-Based Mapping and Navigation
In many autonomous systems, it is costly or impossible to get precise coordinates (yes, there are plenty of GPS-denied environments). Furthermore, exact geometric maps of environments might not even be necessary for navigation and other tasks. This project focuses on determining what can and cannot be accomplished by detecting simple landmarks in the environment and using them for navigation. For example, imagine commands such as "drive toward the landmark" or "drive between the two landmarks". In this paper and this paper, we ask the question: What can be learned about the geometry of the environment when the robot's sensor can only determine either the angular ordering of landmarks or the linear distance ordering of landmarks? By angular ordering, we mean that the sensor can only determine how landmarks are circularly sorted from the robot's viewpoint, with no precise distance or angle information. By linear ordering, we mean that we only know the ordering from closest to furthest of the landmarks at any time. One can learn interesting things like the convex hulls of subsets of landmarks and the Delaunay triangulation. The robot can then perform useful navigation and patrolling strategies. In this and related papers, we ask the question: How many landmark colors are sufficient to avoid navigation confusion in a complicated environment? We introduced a chromatic art gallery problem along the way. All of our landmark-based papers are here. Involved students: Lawrence (Lars) Erickson, Max Katsev.
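The two sensor models can be sketched as follows, assuming point landmarks in the plane; the function names are ours, chosen for illustration. Note that in both cases the sorted list itself is the entire sensor reading: the distances and angles used to compute it are discarded.

```python
import math

def angular_ordering(position, landmarks):
    """Counterclockwise circular ordering of landmarks as seen from
    `position`.  This ordering is all the hypothetical sensor reports;
    no precise distances or angles survive.
    """
    x, y = position
    return sorted(landmarks, key=lambda lm: math.atan2(lm[1] - y, lm[0] - x))

def linear_ordering(position, landmarks):
    """Landmarks sorted from closest to furthest (distance ordering)."""
    x, y = position
    return sorted(landmarks, key=lambda lm: (lm[0] - x) ** 2 + (lm[1] - y) ** 2)
```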
Rapidly Exploring Random Trees (RRTs)
Back in 1998, we introduced the RRT (in this Iowa State tech report), which is a simple, iterative algorithm that quickly searches complicated, high-dimensional spaces for feasible paths. The idea is to incrementally grow a space-filling tree by sampling the space at random and connecting the nearest point in the tree to the new random sample. This results in a simple way to achieve Voronoi bias, which causes aggressive exploration in the early stages and then eventually settles on uniform space coverage. For path planning applications, two trees are usually grown, one from each of the initial and goal regions, respectively. When they connect to each other, a solution is found (see this paper, this paper, and Section 5 of this chapter). Over the past decade, RRTs have been used in numerous robotic systems, including DARPA Urban Challenge vehicles, military UAVs, humanoid robots, soccer-playing robots, and planetary explorer prototypes. They have also been applied outside of robotics, in areas such as computational biology, virtual prototyping, and video games. Numerous extensions and variants of RRTs exist. Just look through the proceedings of any recent robotics conference. For example, in the ICRA 2011 proceedings alone, the term "RRT" appears 926 times. Implementations of RRTs appear in various libraries, including Willow Garage's Robot Operating System (ROS), the Open Motion Planning Library (OMPL), the Motion Strategy Library, MATLAB, RRT* library, and several commercial software packages. A simple Python program (using PyGame) that generates RRTs is here. See the RRT Page for more RRT examples and uses prior to 2003. For deterministic counterparts to RRTs, see Section 5 of this chapter and the IROS 2011 paper that I wrote with James Kuffner. All of our RRT-related papers are here.
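The core loop above fits in a few lines of plain Python; this is a simplified, 2D sketch (the collision test, step size, bounds, and linear nearest-neighbor search are placeholder assumptions, and real implementations use kd-trees and grow a second tree from the goal).

```python
import math
import random

def rrt(start, is_free, x_range, y_range, step=0.5, iters=1000, seed=None):
    """Grow a rapidly exploring random tree from `start` in 2D.

    Returns a dict mapping each node to its parent (the start maps to
    None).  `is_free(p)` is a user-supplied collision test.
    """
    rng = random.Random(seed)
    tree = {start: None}
    for _ in range(iters):
        # Sample the space uniformly; larger Voronoi regions of the
        # tree are hit more often, which yields the Voronoi bias.
        q = (rng.uniform(*x_range), rng.uniform(*y_range))
        # Nearest existing node (linear scan for simplicity).
        near = min(tree, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
        d = math.hypot(q[0] - near[0], q[1] - near[1])
        if d == 0.0:
            continue
        # Extend from the nearest node toward the sample, at most `step` far.
        new = q if d <= step else (near[0] + step * (q[0] - near[0]) / d,
                                   near[1] + step * (q[1] - near[1]) / d)
        if is_free(new):
            tree[new] = near
    return tree
```

Early iterations fling branches toward the unexplored frontier; as the tree fills the space, new samples land ever closer to existing nodes and growth becomes uniform refinement.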
Funding Sources
We are very grateful for the support provided by the Toyota Motor Corporation (Japan), the UIUC Department of Computer Science, Honda, the National Science Foundation, the Office of Naval Research, and DARPA. Most of our funding is applied to the equipment, travel, and salaries of graduate students.
- Sensor Lattices, NSF Cyberphysical Systems, $500,000, 2011-2014, (PI: LaValle)
- Minimalist Mapping and Monitoring, NSF Robotics, $1.28 million, 2009-2013, (PIs: S. LaValle (UIUC), S. Suri (UCSB), F. Bullo (UCSB))
- Reasoning in Reduced Information Spaces, MURI/ONR, $6 million, 2009-2014, (PIs: D. Bagnell (CMU), D. Fox (U Washington), G. Gordon (CMU), S. LaValle (UIUC), M. Likhachev (U Penn), N. Roy (MIT), J. Shi (U Penn), A. Stentz (CMU))
- Sensor Topology for Minimalist Planning, DARPA, ~$8 million, 2006-2012, (a program with 15 PIs, which grew from the DARPA seed grant of R. Ghrist and S. LaValle). Current Lead PI: S. LaValle
- Previous Grants