This is the complete installation guide. If it looks too complex, please have a look at the quick installation guide. There is also an installation guide on the wiki which covers most errors that have ever occurred during installation and startup.
We cover various base systems here, in particular Ubuntu Linux and FreeBSD. We also cover different variants of installation and operation, including working with or without meta data, the XAPI wrapper, area creation, and the management of custom output.
With a POSIX-conforming operating system (this includes all kinds of Linux as well as FreeBSD, OpenBSD and several others), you have already fulfilled most base requirements.
Concerning hardware, I suggest at least 1 GB of RAM. The more RAM is available, the better, because caching of disk content in RAM significantly speeds up Overpass API. Processor speed has little relevance. The hard disk requirements depend on what you want to install: a full planet database with minutely updates should have at least 150 GB of disk space at its disposal; without minute diffs and meta data, 50 GB already suffices.
To automatically download diff files, you need a command line download tool. I suggest wget. If it is not already installed, you can get it, e.g. on Ubuntu, with:
sudo apt-get install wget
Other useful programs are curl and fetch (fetch is available by default on FreeBSD). To completely replace wget, you need to replace wget -O by curl -o in all installation instructions here and in each of the files src/bin/fetch_osc.sh, src/cgi-bin/ping, and src/cgi-bin/template inside the block fetch_file(). The same applies to fetch: in this case, replace wget -O by fetch -o.
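For illustration, all three tools download a file to a given target name in the same way; the URL and file names here are just placeholders:

wget -O target.file "http://example.com/source.file"
curl -o target.file "http://example.com/source.file"
fetch -o target.file "http://example.com/source.file"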
To compile the software, you need a C++ compiler and make. I suggest the GCC collection. If it is not already installed, you can get it, e.g. on Ubuntu, with:
sudo apt-get install g++ make
To compile the software, you also need the expat library. If it is not already installed, you can get it, e.g. on Ubuntu, with:
sudo apt-get install expat libexpat1-dev
You can also build expat from source; this way you don't need root permissions just to install expat: download the latest tarball from the project's page. Expat itself is installed by unpacking it, then configure; make; make install. To use this library, insert CPPFLAGS="-I/path/to/expat/include" and LDFLAGS="-static -L/path/to/expat/lib/" into the make command:
make CPPFLAGS="-I/path/to/expat/include" CXXFLAGS="-O3" LDFLAGS="-static -L/path/to/expat/lib/" install
where /path/to/expat must be replaced by the prefix that you have chosen in the configure step of expat. Note: if you need to supply more than one CPPFLAGS argument this way, use a single CPPFLAGS parameter with both arguments inside the quotation marks, separated by a blank.
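As a sketch, the whole expat build could look as follows; the version number 2.1.0 is only an example, and /path/to/expat is the prefix of your choice:

tar -xzf expat-2.1.0.tar.gz
cd expat-2.1.0
./configure --prefix=/path/to/expat
make
make install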
You need to choose a directory where you put the executable files; you can move them to a different directory later. But the default choice of the installation tool automake, /usr/bin, requires root permissions, although no root permissions are really necessary to run the program. I suggest the parent directory of the source code directory: it can be reached with "`pwd`/../". To configure this output directory and detect necessary adaptations of your system, run in the build subdirectory:
../src/configure --prefix="`pwd`/../"
Generate the executables:
make CXXFLAGS="-O3" install
Systems other than Linux may require some extra parameters here. For example, FreeBSD needs -DNATIVE_LARGE_FILES, because it doesn't have a separate open64 function:
make CXXFLAGS="-O3" CPPFLAGS="-DNATIVE_LARGE_FILES" install
Since version 0.6.98, the database can be cloned from an existing instance rather than created from scratch. This takes only 4 to 8 hours, in comparison to 24 to 48 hours for an import from the planet file. Note that this feature is still rather experimental - please report any problems by email to me (roland.olbricht at gmx.de). If you don't want the entire planet or prefer a manual planet import for some other reason, use the manual import instead.
Download a clone of the database at overpass-api.de with the command:
../src/bin/download_clone.sh --source=http://overpass-api.de/api/ --db-dir="../db/" --meta=no
or
nohup ../src/bin/download_clone.sh --source=http://overpass-api.de/api/ --db-dir="../db/" --meta=no &
If you want meta data, use --meta=yes instead of --meta=no. This downloads about 15 GB (25 GB with meta data) in several compressed files and uncompresses them to a ready-to-use database.
Now you can proceed with minute updates.
The standard use case is to set up the database with the whole planet data, including meta data. If you haven't downloaded an OSM XML planet file yet, you can fetch one for example with:
wget -O planet.osm.bz2 "http://ftp.heanet.ie/mirrors/openstreetmap.org/planet-latest.osm.bz2"
This file has a size of more than 20 GB. Thus, depending on your internet connection, the download may take between 4 hours (fastest possible) and 22 hours (with 2 MBit). If you are not working on your local machine, you may want the download to continue even if you log out. Use nohup for this:
nohup wget -O planet.osm.bz2 "http://ftp.heanet.ie/mirrors/openstreetmap.org/planet-latest.osm.bz2" &
Once you have the file, you can start the import. The import itself may take up to 48 hours:
../src/bin/init_osm3s.sh planet.osm.bz2 "../db/" "../" --meta
or
nohup ../src/bin/init_osm3s.sh planet.osm.bz2 "../db/" "../" --meta &
You may need to adapt the parameters: the first parameter, planet.osm.bz2, is the OSM file to process; the second parameter, "../db/", is the directory where the database should go; and the third parameter, "../", is the base directory of the executables, i.e. update_database must exist in the bin subdirectory of the location the third parameter points to.
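A quick sanity check for the third parameter is to verify that the executable is in place, e.g.:

ls "../bin/update_database"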
You can also use any other OSM file. If you want to save half of the hard disk space and reduce the startup and update time by up to two thirds, you can skip meta data by omitting the --meta parameter.
When this command is done, it writes Update complete. to the console (or to the file nohup.out if you have used nohup). At this point, the database can be used.
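If you have used nohup, you can check for completion with e.g.:

grep "Update complete" nohup.out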
The following steps are only needed if you want minutely updates. In this case, run the following commands:
nohup ../bin/dispatcher --osm-base --meta --db-dir="../db/" &
chmod 666 "../db/osm3s_v0.7.3_osm_base"
(without --meta if you have not processed meta data)
The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.
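Assuming the database directory from above, you can check this with:

grep "Dispatcher just started" "../db/transactions.log"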
nohup ../bin/fetch_osc.sh id "http://planet.openstreetmap.org/minute-replicate/" "../minute-diffs/" &
This should start to fill the directory "../minute-diffs/" with subdirectories that have three-digit names and finally contain files ending in osc.gz and state.txt.
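You can verify that diffs are arriving with e.g.:

find "../minute-diffs/" -name "*.osc.gz" | head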
nohup ../bin/apply_osc_to_db.sh "../db/" "../minute-diffs/" id --meta=yes &
(with --meta=no instead if you have not processed meta data)
These commands don't make sense without nohup, because the programs become daemons and never terminate. Once again, you need to adapt parameters: you must always replace id by the replicate id to start from. If you have obtained your database by cloning, you find the replicate id in the file replicate_id in the database directory. If you have imported the database from an OSM file, search http://planet.openstreetmap.org/minute-replicate/ with your browser for the last replication diff that was created before the planet creation date.
The other parameters only need to be adapted if you have chosen a different directory in a previous step: "../db/" is the directory of the database, "http://planet.openstreetmap.org/minute-replicate/" is the remote source of the replication diffs, and "../minute-diffs/" is the directory where the minute diffs are stored until they have been applied.
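For example, if you have cloned the database, you can read off the replicate id directly:

cat "../db/replicate_id"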
Congratulations! Now you have a database mirror that can serve the entire world and always lags only a few minutes behind the OSM main database. We can now start up the additional modules:
To make your instance publicly visible, you need to make it accessible by a web server. We show here how to do this with the Apache server; Overpass API also works with every other web server that offers CGI. For example, it runs on http://overpass.osm.rambler.ru/cgi/ with nginx.
You need to edit Apache's configuration file, and you do need root permissions to do so.
Apache is configured with the file /etc/apache2/httpd.conf. My configuration file looks, in simplified form, as follows:
ServerName www.overpass-api.de
LogLevel info
DocumentRoot /path/to/osm-3s_v0.7.3/html/

ScriptAlias /api/ /path/to/osm-3s_v0.7.3/cgi-bin/
<Directory "/path/to/osm-3s_v0.7.3/cgi-bin/">
  AllowOverride None
  Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
  Order allow,deny
  Allow from all
</Directory>
The essential part is to replace all occurrences of the path /path/to/osm-3s_v0.7.3/ by the real paths. This configuration file tells Apache to serve the HTML files from the directory /path/to/osm-3s_v0.7.3/html/ and to call programs in /path/to/osm-3s_v0.7.3/cgi-bin/ via CGI. The ScriptAlias makes them visible externally as /api/ instead of /cgi-bin/. For the remaining options, please look into the Apache documentation.
You need to check whether the involved directories and their parent directories have sufficient permissions for all users, because otherwise Apache (which runs as the unprivileged user www-data) cannot access them:
chmod 755 /path
chmod 755 /path/to
chmod 755 /path/to/osm-3s_v0.7.3
chmod 755 /path/to/osm-3s_v0.7.3/html
chmod 755 /path/to/osm-3s_v0.7.3/bin
chmod 755 /path/to/osm-3s_v0.7.3/cgi-bin
chmod 755 /path/to/osm-3s_v0.7.3/db
Some directories are added later for some of the optional modules.
You can now (re)start Apache to let the updated configuration take effect:
sudo apache2ctl graceful
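As a smoke test, assuming the /api/ alias from above and that you are logged in on the server itself, a simple query against the interpreter should return OSM data as XML:

wget -O - "http://localhost/api/interpreter?data=node(50.7,7.1,50.8,7.2);out;"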
The XAPI wrapper provides the XAPI compatibility layer. The only thing it needs is management of its temporary data; for this purpose, you need to start another daemon process:
nohup bin/cleanup_xapi_tmp.sh &
No changes to the Apache configuration or to the database are necessary.
To use areas with Overpass API, you essentially need another permanently running process that generates the current areas from the existing data in batch runs.
First, you need to copy the rules directory into a subdirectory of the database directory:
cp -pR "../rules" "../db/"
The next step is to start a second dispatcher that coordinates read and write operations for the area-related files in the database:
nohup ../bin/dispatcher --areas --db-dir="../db/" &
chmod 666 "../db/osm3s_v0.7.3_areas"
The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.
The third step then is to start the rule batch processor as a daemon:
nohup ../bin/rules_loop.sh "../db/" &
Now we don't want this process to impede the real business of the server. Therefore, I strongly suggest lowering its priority. To do this, you need to find with
ps -ef | grep rules
the PIDs belonging to the processes rules_loop.sh and ./osm3s_query --progress --rules. For each of the two PIDs, run the commands:
renice -n 19 -p PID
ionice -c 2 -n 7 -p PID
The second command is not available on FreeBSD. This is not a big problem, because this rescheduling just gives hints to the operating system.
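You can verify the new priority with e.g.:

ps -o pid,nice,comm -p PID

where PID is again one of the two PIDs from above.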
When the batch process has completed its first cycle, all areas become accessible via the database at once. This may take up to 24 hours.
To make the custom output feature operational, you only need to copy the default templates into the corresponding subdirectory of the database:
cp -pR "../templates" "../db/"
No runtime component or change in the Apache configuration is needed.