Benchmarking planners is a time-consuming process, so instead of waiting on the web page for results, users submit jobs and are notified when the results become available. This process is handled by Celery, an asynchronous task queue implemented in Python. When a benchmarking job is submitted, it is handed to a Celery worker to execute, freeing the Flask server to continue serving other requests. Using Celery for benchmarking was fairly simple to set up, since the client doesn't need any information from the completed job.
However, solving a motion planning problem can also take a long time, so solving synchronously would be impractical.
The difference from benchmarking is that the user is actively waiting for the solution. For asynchronous solving to work, the client and server must coordinate to ensure that the right results are returned to the right user.
To solve this problem, I came up with the pattern seen above. When the user submits a motion planning problem, the task is assigned to a Celery worker and the ID of that task is immediately returned to the client. Then, the client will periodically poll the server for the results.
```python
@app.route('/omplapp/poll/<task_id>', methods=['POST'])
def poll(task_id):
    """
    Checks if the task corresponding to the input ID has completed.
    If the task is done solving, the solution is returned.
    """
    result = solve.AsyncResult(task_id)
    if result.ready():
        return json.dumps(result.get()), 200
    else:
        return "Result for task id: " + task_id + " isn't ready yet.", 202
```
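On the client side, polling is just a loop that hits this endpoint until it gets a 200. Here is a minimal sketch in Python; in the browser this would be a repeated AJAX call, so the `fetch` function is injected here purely so the loop can be shown without a running server. The names and defaults are illustrative, not the actual OMPL Web code.

```python
import time

def poll_for_result(fetch, task_id, interval=2.0, max_attempts=30):
    """Repeatedly poll the server until the solver task finishes.

    `fetch` performs one poll and returns (status_code, body); in the
    real application this would be an HTTP request to
    /omplapp/poll/<task_id> from the browser.
    """
    for _ in range(max_attempts):
        status, body = fetch(task_id)
        if status == 200:        # task finished; body holds the solution
            return body
        time.sleep(interval)     # 202: not ready yet, wait and retry
    raise TimeoutError("task %s did not finish in time" % task_id)
```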
With problem configuration and visualization implemented, the next big feature is benchmarking.
Benchmarking is an important feature of OMPL Web that does not exist in its desktop counterpart. Since each planner has a unique way of finding solutions, different planners may be better suited for different types of problems. For this reason, it is important to know when to use a specific planner. By benchmarking multiple planners on a given problem, we can compare each planner’s performance across various metrics such as time, memory, solution length, etc.
Furthermore, since the planners are all sampling-based and therefore randomized, the problem is solved repeatedly to obtain statistically meaningful data.
The benchmarking functionality of OMPL Web allows a user to
solve any given problem
n times with multiple planners and various planner
configurations. On the
Benchmarking page, the user can add and configure the
planners to be benchmarked. Then, they can submit a job to the server
and be notified via email when the job completes.
When the user submits a job, a
.cfg file is created on the client side
containing all of the problem configuration details and benchmarking settings.
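The .cfg file is a plain INI-style file, so on the server side it can be read with Python's standard configparser. A rough sketch of this step; the section and key names below follow the general shape of OMPL.app configuration files but are assumptions, not the exact fields OMPL Web uses:

```python
import configparser

# Illustrative sample of an uploaded .cfg file; field names are assumed.
SAMPLE_CFG = """\
[problem]
name = cubicles
robot = cubicles_robot.dae
world = cubicles_env.dae

[benchmark]
time_limit = 10.0
"""

def load_config(text):
    """Parse an INI-style .cfg string into a nested dict of settings."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {section: dict(parser[section]) for section in parser.sections()}
```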
This information is sent to the server for processing. The server processes this
data using the benchmarking scripts included in OMPL and creates a SQLite
database of results. Currently, this (very small) database is emailed to the
user. This database file can be uploaded to Planner
Arena for an interactive visualization of the results.
In the future, the benchmarking results will be made available to the user directly on Planner Arena without the need for the user to obtain the database file.
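Since the emailed file is an ordinary SQLite database, it can also be inspected locally with Python's sqlite3 module. A hedged sketch, assuming the per-run results live in a `runs` table with `plannerid` and `time` columns; that layout is an assumption, so check the actual schema (e.g. with `.schema` in the sqlite3 shell) before relying on it:

```python
import sqlite3

def average_solve_time(con, planner_id):
    """Mean solve time for one planner across all benchmark runs.

    The `runs` table and its columns are assumptions about the
    benchmark database layout, not a documented schema.
    """
    row = con.execute(
        "SELECT AVG(time) FROM runs WHERE plannerid = ?", (planner_id,)
    ).fetchone()
    return row[0]
```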
Once a motion planning problem has been solved, the user is presented with several options for visualizing the solution. Initially, a line is displayed indicating the path from the start position to the goal position. The user can then toggle an animation of the robot and adjust its speed as it travels from its starting position to the goal. In addition, a static visualization consisting of robots placed at points along the solution path is also available.
The user can interact with the visualization using the mouse. Left-clicking and dragging will rotate the environment, while right-clicking and dragging will pan. The visualization was constructed with WebGL using the THREE.js library.
Significant progress has been made in the past week or so and the application looks and behaves very differently than before. Furthermore, the user interface has been completely overhauled to accommodate new features and look cleaner.
The biggest change in terms of the interface is the addition of the visualization to the configuration page. Now, the robot’s position will be updated to reflect any changes the user makes in the configuration panel. When the problem has been configured to the user’s liking and solved, the solution path (if one was found) will be drawn and animated.
If the user selects an existing problem from the
Problem drop down menu, a
request is sent to the server to retrieve that problem’s configuration
information and its robot and environment models. The configuration fields are
then filled in with the information from the server and the models are loaded and
drawn. The screenshot above shows the state of the interface when an existing problem has been selected.
If the user selects a custom problem from the
Problem drop down menu, then the sidebar will expand to reveal options for uploading custom
robot and environment models and, optionally, a configuration file. Once these
files have been selected, they will be uploaded to the server and drawn, as seen
in the screenshot below.
The Planner tab allows the user to select which planner to use for solving
the problem. When a planner is selected, the default values for that planner’s
parameters are retrieved from the server and displayed. The user can edit these
options if they wish. The
Bounds tab allows the user to modify the bounding
box for the problem, if they are dissatisfied with the default values for that problem. In the
Solve tab, the user can name the problem and specify a time
limit for solving the problem. If all the required fields are satisfied, they
can click the solve button and await the results. At this point the problem is
sent to the server for processing. When the results are returned, the
information is displayed within the pane. If a successful solution path was
found, it will be drawn and the robot will be animated traveling the path. This
animation is still in development.
This summer, I am working in the Kavraki Lab research group at Rice University. My main project this summer will be to create a web application that will allow users to solve motion planning problems for robots using the Open Motion Planning Library.
An existing application, called OMPL.app, is available for desktop clients but it involves a fairly lengthy installation process. The purpose of the web application is to provide all of the features of the desktop application and then some. Users will be able to configure a motion planning problem and solve for a solution path. Upon success, the solution will be displayed and the user will be able to interact with the path. The functionality described thus far already exists in OMPL.app, but the web application will have several key improvements.
One of the primary motivations for creating the web version is to drastically reduce the time it takes to go from being interested in motion planning to actually creating and solving motion planning problems. To solve a problem with OMPL.app, the following steps must be carried out first:

- Download the OMPL library and its dependencies
- Install all dependencies
- Build OMPL
- Generate Python bindings
- Run OMPL.app

With the web application, there is only one step:

- Open the website
OMPL.app can take hours to set up; the web implementation will take mere seconds.
The other important improvement will be planner benchmarking. OMPL takes a sampling-based approach to motion planning, and many planners exist that utilize different sampling methods. In addition to the dozen or so planners available within OMPL, users can create their own planners and use those instead. Because planners vary significantly in how they operate, one planner may yield better results on a specific query than another. For this reason, it is important to be able to compare the results of different planners for a given problem. Currently, planners can be compared on the Planner Arena website.
Once a user is satisfied with their problem configuration on the OMPL Web application, they will be able to submit a job to the server for benchmarking. They can specify benchmarking parameters such as the number of iterations to run each planner, time limits, etc. When the job is completed, the user will be notified and their results will be available on Planner Arena for analysis.
In the next development blog entry, I will discuss the work that I’ve completed so far and the high level plan for the remaining development.
Ever since I started using Vim regularly, I have found myself looking for ways to do more things from the command line. So, today I set out to write a basic command-line program for viewing the weather. W is a Python script that displays weather data for a given location and timeframe. I'm sure there are many like it, but this one is mine.
The script is invoked like this:
$ ./w.py <city> <option>
<option> can be:
- `today`: displays today's weather
- `tomorrow`: displays tomorrow's weather
- `week`: displays a five day forecast
- `sun`: displays sunrise and sunset times
- `wind`: displays current wind conditions
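These options map naturally onto a small argparse interface. Here is a sketch of what the argument handling in w.py might look like; the actual script may well parse sys.argv by hand, so treat this as an assumption rather than the real implementation:

```python
import argparse

def build_parser():
    """Command-line interface mirroring the usage shown above."""
    parser = argparse.ArgumentParser(
        prog="w.py",
        description="View weather data from the command line")
    parser.add_argument("city", help="location to look up")
    parser.add_argument(
        "option",
        choices=["today", "tomorrow", "week", "sun", "wind"],
        help="which report to display")
    return parser
```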
Take a look at the project page for details on installation and usage.