Route Planning in Transportation Networks

We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs.

SYNOPSIS

This website contains additional information about our recent survey article on route planning in transportation networks.

We especially encourage authors of future work on route planning to download and use our benchmark tool to scale their running times and make them comparable to previous work. The benchmark scores used in our survey are listed below under Reference Scores.

We also maintain a list of errata for mistakes discovered in the survey after publication.

A preprint of the survey article is available from arXiv.

THE BENCHMARK

For the purpose of comparing running times of different algorithms, we provide a small benchmark tool. It has been used to scale the running times for the road network experiments in the survey. The tool downloads the USA road network graph from the website of the 9th DIMACS Implementation Challenge and runs one-to-all shortest-path queries on it. It reports the median (over 9 repetitions) running time of 10 queries.

We encourage future work to make use of the benchmark. To that end, we provide benchmark scores for the machines used in the publications of many of the important algorithms.

Downloading and Running

We provide the benchmark as C++ source code, distributed under the MIT license (copyright © Microsoft Corporation). Unless you have a specific reason not to, please download the latest version.

Before compiling and running the benchmark, make sure that a C++ compiler and GNU make are installed on your system. On Linux they should be available by default, but if you are running, e.g., Cygwin, you may need to install them manually via the setup utility.

To install the benchmark, first unzip the downloaded file to an arbitrary location, then open a terminal window and change to the directory you extracted it to. From there, simply call

make

This will produce a binary called sp_benchmark in the root directory of the benchmark. To download the test graph (the 9th DIMACS Implementation Challenge USA road network graph with the travel time metric), simply type

make data

Note that this may take several minutes, depending on your Internet bandwidth. The downloaded file size is approximately 342 MB. Once the download has finished, the benchmark can be run by calling

./sp_benchmark

The benchmark automatically parses the graph and then runs 10 random queries. This experiment is repeated 9 times, and the score reported at the end is the median running time over these 9 repetitions.
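For illustration only, the following is a minimal sketch of this measurement scheme (a fixed set of 10 source vertices, 9 timed repetitions, median of the per-repetition times). It is not the sp_benchmark source itself; the graph representation and the textbook one-to-all Dijkstra below are placeholders for whatever the actual tool uses.

#include <algorithm>
#include <chrono>
#include <functional>
#include <limits>
#include <queue>
#include <random>
#include <utility>
#include <vector>

struct Arc { int head; unsigned weight; };
struct Graph { std::vector<std::vector<Arc>> out; };  // adjacency lists, one per node

// Plain one-to-all Dijkstra with a binary heap (placeholder implementation).
void dijkstra_one_to_all(const Graph& g, int source, std::vector<unsigned>& dist) {
  const unsigned INF = std::numeric_limits<unsigned>::max();
  dist.assign(g.out.size(), INF);
  using Item = std::pair<unsigned, int>;  // (distance, node)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
  dist[source] = 0;
  pq.push({0, source});
  while (!pq.empty()) {
    auto [d, u] = pq.top();
    pq.pop();
    if (d > dist[u]) continue;  // skip stale heap entries
    for (const Arc& a : g.out[u]) {
      if (d + a.weight < dist[a.head]) {
        dist[a.head] = d + a.weight;
        pq.push({dist[a.head], a.head});
      }
    }
  }
}

// Time num_queries one-to-all queries per repetition and return the median
// wall-clock time (in milliseconds) over num_reps repetitions.
double benchmark_score(const Graph& g, int num_queries = 10, int num_reps = 9) {
  std::mt19937 rng(42);  // fixed seed: every repetition runs the same sources
  std::uniform_int_distribution<int> pick(0, static_cast<int>(g.out.size()) - 1);
  std::vector<int> sources(num_queries);
  for (int& s : sources) s = pick(rng);

  std::vector<double> times_ms;
  std::vector<unsigned> dist;
  for (int rep = 0; rep < num_reps; ++rep) {
    auto start = std::chrono::steady_clock::now();
    for (int s : sources) dijkstra_one_to_all(g, s, dist);
    auto stop = std::chrono::steady_clock::now();
    times_ms.push_back(std::chrono::duration<double, std::milli>(stop - start).count());
  }
  std::sort(times_ms.begin(), times_ms.end());
  return times_ms[times_ms.size() / 2];  // median over the repetitions
}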

Advanced Parameters

The benchmark tool accepts optional parameters, as follows:

./sp_benchmark <graph file> <#queries> <#repetitions>

The supplied graph file has to be in DIMACS format. To run the benchmark on a different graph with 50 queries repeated 20 times, for example, you would call ./sp_benchmark data/my_graph.gr 50 20. However, for the purpose of comparing running times with previous work, please always call ./sp_benchmark without any additional parameters.
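For reference, graph files follow the plain-text format of the 9th DIMACS Implementation Challenge: comment lines start with c, a single problem line p sp <#nodes> <#arcs> states the graph size, and each arc line a <tail> <head> <weight> lists one directed arc with a nonnegative integer weight (node IDs are 1-based). A tiny, made-up example:

c tiny made-up example: 3 nodes, 4 arcs
p sp 3 4
a 1 2 120
a 2 1 120
a 2 3 80
a 3 1 200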

Reference Scores

The following table lists the machines used in the surveyed road network publications together with their benchmark scores. While more publications certainly exist for some algorithms (than those listed in the “Publications” column), the listed publications cover the most recent experimental results for the respective algorithms. We recommend using the figures from these publications when comparing future work.

Name | Specification | Score [ms] | Algorithms | Publications
compute9 | AMD Opteron 2218 | 101,552 | ALT, orig. Arc Flags, CALT, orig. CHASE, HH, HH*, HNR, HPML, Reach, ReachFlags, REAL, SHARC, orig. TNR | DHMSW09, BDSSSW10
compute10 | Intel Xeon E5430 | 51,036 | |
compute11 | Intel Xeon E5-2670 | 36,582 | CCH | DSW14
SPA-2 | Intel Xeon X5680 | 43,719 | Arc-Flags, CHASE, CRP, HL, HLC, PHAST, PUNCH, Table Lookup | ADGW12, DGNW13, DGPW13, DGW13
SPA-3 | Intel Core-i7 920 | 68,539 | CH-based TNR | ALS13
SPA-4 | Intel Xeon E5-2690 | 43,759 | |
CHOPT | AMD Opteron 270 | 117,946 | CH, TNR + Arc Flags | GSSV12
CUSTALT | Intel Core-i7 | 34,846 | customizable ALT | EP13

To compare your new algorithm against one of the algorithms in the table, run the benchmark on your machine and divide your score by the respective score in the table. Multiply the published running time figures of that algorithm by this factor to scale them to your machine.
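For example (with made-up numbers): if your machine scores 21,860 ms and you want to compare against figures obtained on SPA-2 (score 43,719 ms), the scaling factor is 21,860 / 43,719 ≈ 0.5, so running times published for SPA-2 would be multiplied by roughly 0.5 to estimate what they would be on your machine.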

If you want your entry added to the table, drop us an email at tpajor [at] microsoft [dot] com.

ERRATA

Corrections for mistakes and errors discovered in the article after publication will be posted here.