New in version 1.2.0.
MapProxy supports multiple backends to store the internal tiles. The default backend is file based and does not require any further configuration.
You can configure a backend for each cache with the cache option. Each backend has a type and one or more options.
caches:
  mycache:
    sources: [...]
    grids: [...]
    cache:
      type: backendtype
      backendoption1: value
      backendoption2: value
The following backend types are available.
This is the default cache type and it uses a single file for each tile. Available options are:
The directory layout MapProxy uses to store tiles on disk. Defaults to tc, which uses a TileCache compatible directory layout (zz/xxx/xxx/xxx/yyy/yyy/yyy.format). mp uses a directory layout with less nesting (zz/xxxx/xxxx/yyyy/yyyy.format). tms uses TMS compatible directories (zz/xxxx/yyyy.format). quadkey uses Microsoft Virtual Earth or quadkey compatible directories (see http://msdn.microsoft.com/en-us/library/bb259689.aspx). arcgis uses a directory layout with hexadecimal row and column numbers that is compatible with ArcGIS exploded caches (Lzz/Rxxxxxxxx/Cyyyyyyyy.format).
Note
The tms, quadkey and arcgis layouts are not suited for large caches, since they will create directories with thousands of files, which most file systems do not handle well.
When true, MapProxy will use the actual grid name in the path instead of the SRS code. E.g. tiles will be stored in ./cache_data/mylayer/mygrid/ instead of ./cache_data/mylayer/EPSG1234/.
New in version 1.5.0.
Directory where MapProxy should directly store the tiles. This will not add the cache name or grid name (use_grid_name) to the path. You can use this option to point MapProxy to an existing tile collection (created with gdal2tiles for example).
New in version 1.5.0.
Directory where MapProxy should write lock files when it creates new tiles for this cache. Defaults to cache_data/tile_locks.
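The file cache options above can be combined in a single cache configuration. A minimal sketch, assuming a WMS source named mywms (the source name is a placeholder):

```yaml
caches:
  file_cache:
    sources: [mywms]
    grids: [GLOBAL_MERCATOR]
    cache:
      type: file
      # less nested directory layout
      directory_layout: mp
      # store tiles under the grid name instead of the SRS code
      use_grid_names: true
```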
New in version 1.6.0.
Use a single SQLite file for this cache. It follows the MBTiles specification.
Available options:
Directory where MapProxy should write lock files when it creates new tiles for this cache. Defaults to cache_data/tile_locks.
New in version 1.6.0.
You can set sources to an empty list if you use an existing MBTiles file and do not have a source.
caches:
  mbtiles_cache:
    sources: []
    grids: [GLOBAL_MERCATOR]
    cache:
      type: mbtiles
      filename: /path/to/bluemarble.mbtiles
Note
The MBTiles format specification does not include timestamps for each tile, and the seeding function is therefore limited. If you include any refresh_before time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support remove_before times for MBTiles and always removes all tiles. Use the --summary option of the mapproxy-seed tool.
The note about bulk_meta_tiles for SQLite below applies to MBTiles as well.
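To use bulk_meta_tiles with an MBTiles cache, set the option next to the cache configuration, as in the SQLite example below. A sketch, assuming a tiled source named mytilesource (the source and file names are placeholders):

```yaml
caches:
  mbtiles_bulk_cache:
    sources: [mytilesource]
    # store all tiles of a meta tile in one transaction
    bulk_meta_tiles: true
    grids: [GLOBAL_MERCATOR]
    cache:
      type: mbtiles
      filename: /path/to/tiles.mbtiles
```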
New in version 1.6.0.
Use SQLite databases to store the tiles, similar to the mbtiles cache. The difference is that the sqlite cache stores each level in a separate database. This makes it easy to remove complete levels during mapproxy-seed cleanup processes. The sqlite cache also stores the timestamp of each tile.
Available options:
Directory where MapProxy should write lock files when it creates new tiles for this cache. Defaults to cache_data/tile_locks.
New in version 1.6.0.
caches:
  sqlite_cache:
    sources: [mywms]
    grids: [GLOBAL_MERCATOR]
    cache:
      type: sqlite
      directory: /path/to/cache
Note
New in version 1.10.0.
All tiles from a meta tile request are stored in the SQLite file in a single transaction to increase performance. You need to activate the bulk_meta_tiles option to get the same benefit when you are using tiled sources.
caches:
  sqlite_cache:
    sources: [mytilesource]
    bulk_meta_tiles: true
    grids: [GLOBAL_MERCATOR]
    cache:
      type: sqlite
      directory: /path/to/cache
New in version 1.3.0.
Store tiles inside a CouchDB. MapProxy creates a JSON document for each tile. This document contains metadata, like timestamps, and the tile image itself as an attachment.
Besides a running CouchDB, you will need the Python requests package. You can install it the usual way, for example pip install requests.
You can configure the database URL and name, the tile ID format, and additional metadata.
Available options:
Directory where MapProxy should write lock files when it creates new tiles for this cache. Defaults to cache_data/tile_locks.
New in version 1.6.0.
Each tile document needs a unique ID. You can change the format with a Python format string that expects the following keys: x, y, z and grid_name.
The default ID uses the following format:
%(grid_name)s-%(z)d-%(x)d-%(y)d
Note
You can’t use slashes (/) in CouchDB IDs.
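For example, a tile_id format that uses underscores instead of the default dashes; a sketch, where the cache, source and grid names are placeholders:

```yaml
caches:
  mycouchdbcache:
    sources: [mywms]
    grids: [mygrid]
    cache:
      type: couchdb
      url: http://localhost:9999
      db_name: mywms_tiles
      # underscores instead of dashes; slashes are not allowed
      tile_id: '%(grid_name)s_%(z)d_%(x)d_%(y)d'
```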
MapProxy stores a JSON document for each tile in CouchDB, and you can add additional key-value pairs with metadata to each document. There are a few predefined values that MapProxy will replace with tile-dependent values; all other values will be added as they are.
Predefined values:
caches:
  mycouchdbcache:
    sources: [mywms]
    grids: [mygrid]
    cache:
      type: couchdb
      url: http://localhost:9999
      db_name: mywms_tiles
      tile_metadata:
        mydata: myvalue
        tile_col: '{{x}}'
        tile_row: '{{y}}'
        tile_level: '{{z}}'
        created_ts: '{{timestamp}}'
        created: '{{utc_iso}}'
        center: '{{wgs_tile_centroid}}'
MapProxy will place the JSON document for tile z=3, x=1, y=2 at http://localhost:9999/mywms_tiles/mygrid-3-1-2. The document will look like:
{
    "_attachments": {
        "tile": {
            "content_type": "image/png",
            "digest": "md5-ch4j5Piov6a5FlAZtwPVhQ==",
            "length": 921,
            "revpos": 2,
            "stub": true
        }
    },
    "_id": "mygrid-3-1-2",
    "_rev": "2-9932acafd060e10bc0db23231574f933",
    "center": [
        -112.5,
        -55.7765730186677
    ],
    "created": "2011-12-15T12:56:21Z",
    "created_ts": 1323953781.531889,
    "mydata": "myvalue",
    "tile_col": 1,
    "tile_level": 3,
    "tile_row": 2
}
The _attachments part is the internal structure of CouchDB where the tile itself is stored. You can access the tile directly at http://localhost:9999/mywms_tiles/mygrid-3-1-2/tile.
New in version 1.6.0.
Store tiles in a Riak cluster. MapProxy creates keys with binary data as value and timestamps as user-defined metadata. This backend is good for very large caches: data can be distributed over multiple nodes, providing fault-tolerant and highly available storage. A Riak cluster is masterless and each node can handle read and write requests.
You will need the Python Riak client version 2.0 or newer. You can install it in the usual way, for example with pip install riak. Environments with an older version must be upgraded with pip install -U riak.
Available options:
caches:
  myriakcache:
    sources: [mywms]
    grids: [mygrid]
    cache:
      type: riak
      nodes:
        - host: 1.example.org
          pb_port: 9999
        - host: 1.example.org
        - host: 1.example.org
      protocol: pbc
      bucket: myriakcachetiles
      default_ports:
        pb: 8087
        http: 8098
New in version 1.10.0.
Store tiles in a Redis in-memory database. This backend is useful for short-term caching. Typical use-case is a small Redis cache that allows you to benefit from meta-tiling.
Your Redis database should be configured with maxmemory and maxmemory-policy options to limit the memory usage. For example:
maxmemory 256mb
maxmemory-policy volatile-ttl
You will need the Python Redis client. You can install it in the usual way, for example with pip install redis.
Available options:
caches:
  redis_cache:
    sources: [mywms]
    grids: [mygrid]
    cache:
      type: redis
      default_ttl: 600
New in version 1.10.0.
Store tiles in a GeoPackage database. MapProxy creates a tile table if one isn't defined and populates the required metadata fields. This backend is good for datasets that require portability. Available options:
You can set sources to an empty list if you use an existing GeoPackage file and do not have a source.
caches:
  geopackage_cache:
    sources: []
    grids: [GLOBAL_MERCATOR]
    cache:
      type: geopackage
      filename: /path/to/bluemarble.gpkg
      table_name: bluemarble_tiles
Note
The GeoPackage format specification does not include timestamps for each tile, and the seeding function is therefore limited. If you include any refresh_before time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support remove_before times for GeoPackage caches and always removes all tiles. Use the --summary option of the mapproxy-seed tool.
New in version 1.10.0.
Store tiles in Amazon Simple Storage Service (S3).
You will need the Python boto3 package. You can install it in the usual way, for example with pip install boto3.
Available options:
Note
The hierarchical directory_layout options can hit limitations of S3 “if you are routinely processing 100 or more requests per second”. directory_layout: reverse_tms can work around this limitation. Please read S3 Request Rate and Performance Considerations for more information on this issue.
caches:
  my_layer_20110501_epsg_4326_cache_out:
    sources: [my_layer_20110501_cache]
    cache:
      type: s3
      directory: /1.0.0/my_layer/default/20110501/4326/
      bucket_name: my-s3-tiles-cache

globals:
  cache:
    s3:
      profile_name: default
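To work around the request rate limitation mentioned above, you can switch the layout. A sketch with directory_layout: reverse_tms, where the cache, source and grid names are placeholders:

```yaml
caches:
  s3_cache:
    sources: [mywms]
    grids: [GLOBAL_WEBMERCATOR]
    cache:
      type: s3
      bucket_name: my-s3-tiles-cache
      # reversed key order spreads keys across S3 partitions
      directory_layout: reverse_tms
```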
New in version 1.10.0.
Store tiles in ArcGIS compatible compact cache files. A single compact cache .bundle file stores up to about 16,000 tiles. There is one additional .bundlx index file for each .bundle data file.
Only version 1 of the compact cache format (ArcGIS 10.0-10.2) is supported. Version 2 (ArcGIS 10.3 or higher) is not supported at the moment.
Available options:
You can set sources to an empty list if you use existing compact cache files and do not have a source.
The following configuration will load tiles from /path/to/cache/L00/R0000C0000.bundle, etc.
caches:
  compact_cache:
    sources: []
    grids: [webmercator]
    cache:
      type: compact
      version: 1
      directory: /path/to/cache
Note
The compact cache format does not include timestamps for each tile, and the seeding function is therefore limited. If you include any refresh_before time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support remove_before times for compact caches and always removes all tiles. Use the --summary option of the mapproxy-seed tool.
Note
The compact cache format is append-only to allow parallel read and write operations. Removing or refreshing tiles with mapproxy-seed does not reduce the size of the cache files. Therefore, this format is not suitable for caches that require frequent updates.