Closed mrgloom closed 7 years ago
You may have to rerun with the options `-z` and `-e`, where the parameter for `-z` is the missing zoom level.
```
gdal2tilesp.py --help
  -z ZOOM, --zoom=ZOOM  Zoom levels to render (format: '2-5' or '10').
  -e, --resume          Resume mode. Generate only missing files.
```
Discussion at http://gis.stackexchange.com/questions/224778/what-is-the-fastest-version-of-gdal2tiles
I tried to build tiles with your version of gdal2tiles and then heal the missing tiles with the `-e` parameter, but it did not help (the tile-creation part takes 13m and the healing part takes about 1m). Also, the missing tiles seem to be in the low-resolution part of the pyramid.

This is not directly related but may be helpful, because the bug may be not in the missing files but in tilemapresource.xml?
I have 2 GeoTIFF orthophotomaps that have some overlap.
1) Build a global .vrt via `gdalbuildvrt` from these images and build tiles via the original gdal2tiles (everything is fine)
2) Build tiles separately for the 1st image and the 2nd image (everything is fine)
3) Use tilemapresource.xml from [1] to replace tilemapresource.xml in [2] for the 1st and 2nd image (everything is fine)
4) Copy the tiles from the 1st-image tiles and the 2nd-image tiles into a separate dir and then copy tilemapresource.xml from [1] (something is wrong, some tiles are missing!)
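One plausible failure mode in step 4 is the overlap itself: tiles that both images produced collide when copied into one directory. A small sketch with hypothetical tile paths shows which tiles can be copied directly and which need a pixel-wise merge instead:

```python
# Hypothetical tile paths from two separately tiled images; in a real
# tile tree these are relative paths like "12/654/1583.png".
tiles_1 = {"12/654/1583.png", "12/654/1584.png"}
tiles_2 = {"12/654/1584.png", "12/655/1583.png"}

# Tiles produced by only one image are safe to copy as-is.
unique = tiles_1 ^ tiles_2
# Tiles in the overlap exist in BOTH sets; a plain copy keeps only the
# last version written, dropping pixels from the other image, so these
# need a pixel-wise merge instead.
overlap = tiles_1 & tiles_2

print(sorted(unique))   # → ['12/654/1583.png', '12/655/1583.png']
print(sorted(overlap))  # → ['12/654/1584.png']
```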
This works for me, although it's not clear why it has to be done.
```sh
export FORMAT=PNG
export MIN=7
export MAX=9
export INPUT_FILE=big.vrt
gdal2tilesp.py -z $MIN-$MAX -w all -f $FORMAT $INPUT_FILE
gdal2tilesp.py -z 7 -w all -e -f $FORMAT $INPUT_FILE
gdal2tilesp.py -z 8 -w all -e -f $FORMAT $INPUT_FILE
```
I had the same problem. Try closing the pool of processes only once, at the end of the script, outside the loops. If you are using a Unix-based OS you should also add pool.join() to avoid zombie processes. The last commit is for the Windows-based script, so if you add pool.join() there, Windows will hang.
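The pattern described above can be sketched as follows (render_tile is a hypothetical stand-in for the per-tile work, not the actual gdal2tilesp code): the pool is created once, all work is submitted inside the loops, and close()/join() are called exactly once after them.

```python
from multiprocessing import Pool


def render_tile(tile_id):
    # placeholder for the real per-tile rendering work
    return tile_id * tile_id


if __name__ == "__main__":
    pool = Pool(processes=4)
    results = []
    for zoom in range(2):  # loops over zoom levels / tile batches
        results.append(pool.map(render_tile, range(5)))
    # close/join once, OUTSIDE the loops; join() reaps the workers
    # and avoids zombie processes on Unix
    pool.close()
    pool.join()
    print(results)
```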
Thank you, @bolshoydi. This is useful. For reference, this is commit 998ce9099957cb7ea837c91930cf90e0a8b14d63, which I believe you made as a fix for Windows, so it does not have pool.join().
@roblabs, I also tried going back to processes without Pool, and process.join() works fine for Windows; this method gave me a 25% time improvement compared with Pool. But after the first iteration of the base level, the overview-tiles stage starts (I don't know why, because process.join() stands before the overview tiles and should wait until all the base tiles have been generated), and it begins making overview tiles from the just-generated base tiles (about 100 tiles). The overview-tiles function then throws lots of "File doesn't exist" errors, which is to be expected because not all base tiles were generated yet. Still, after the script finishes, all the tiles are generated (1808 tiles on my test tiff). Now I am thinking about how to stop the errors being thrown for not-yet-generated tiles.
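The ordering aimed for above can be sketched like this (base_tiles_worker is a hypothetical placeholder, not the gdal2tilesp code): every base-tile process is joined before overview generation starts, and the result queue is drained before joining so a worker blocked on a full queue pipe cannot deadlock the join.

```python
from multiprocessing import Process, Queue


def base_tiles_worker(cpu, out_queue):
    # placeholder: each worker renders its share of base tiles
    out_queue.put([(cpu, i) for i in range(3)])


if __name__ == "__main__":
    queue = Queue()
    procs = [Process(target=base_tiles_worker, args=(cpu, queue))
             for cpu in range(2)]
    for p in procs:
        p.start()
    # drain the queue BEFORE join: a child with unsent queue data
    # can otherwise block forever in join()
    done = [queue.get() for _ in procs]
    for p in procs:
        p.join()  # wait until ALL base tiles exist
    # only now is it safe to build overview tiles from the base tiles
    print(len(done))
```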
@mrgloom @roblabs see the new commit, it should fix the issue of missing tiles. Tested on a tif (200 MB, zoom levels 12-18) and a vrt of jpg (2.86 GB, zoom levels 12-21): 72b40b7
This is excellent work, @bolshoydi. 👍
See comments on https://github.com/roblabs/gdal2tilesp/pull/6#pullrequestreview-19587628
I think those will be worthwhile to fix up before we attempt to push upstream. In the meantime, @mrgloom, you can try out 72b40b7.
Fixed with PR #6. Thanks again @bolshoydi. I just tested it and the speed improvement is impressive.
Something goes wrong:
```
Generating Base Tiles:
Traceback (most recent call last):
  File "/home/user/bin/python-gdal/gdal2tiles_v5.py", line 2688, in <module>
    total = queue.get(timeout=1)
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 132, in get
    raise Empty
Queue.Empty
```
As a workaround I tile each orthophoto separately with the original single-threaded gdal2tiles.py and then merge the tiles.
Here is the merge_tiles.py code; maybe it would be faster if the merge were done in parallel per pyramid level.
```python
import os
import sys
from shutil import copyfile

import cv2
import numpy as np


def merge_tiles(tile_filename, global_tile_filename):
    # IMREAD_UNCHANGED keeps the alpha channel of the RGBA tiles
    tile_1 = cv2.imread(tile_filename, cv2.IMREAD_UNCHANGED)
    tile_2 = cv2.imread(global_tile_filename, cv2.IMREAD_UNCHANGED)
    if len(tile_1.shape) != 3 or len(tile_2.shape) != 3:
        print('tile_1.shape', tile_1.shape)
        print('tile_2.shape', tile_2.shape)
        assert len(tile_1.shape) == 3 and len(tile_2.shape) == 3
    if tile_1.shape[2] != 4 or tile_2.shape[2] != 4:
        print('tile_1.shape', tile_1.shape)
        print('tile_2.shape', tile_2.shape)
        assert tile_1.shape[2] == 4 and tile_2.shape[2] == 4
    # non-overlapping pixels are transparent (zero), so a per-pixel
    # maximum combines the two tiles
    merged_tile = np.maximum(tile_1, tile_2)
    return merged_tile


if len(sys.argv) != 3:
    print("Wrong number of arguments")
    sys.exit(1)

part_tiles_dir = sys.argv[1]
global_tiles_dir = sys.argv[2]
print('part_tiles_dir', part_tiles_dir)
print('global_tiles_dir', global_tiles_dir)

tiles_counter = 0
# for each zoom level in part_tiles_dir
for item in os.listdir(part_tiles_dir):
    if os.path.isdir(os.path.join(part_tiles_dir, item)):
        part_tiles_pyr_level_dir = os.path.join(part_tiles_dir, item)
        # for every column subfolder in the pyramid level
        for subfolder in os.listdir(part_tiles_pyr_level_dir):
            if os.path.isdir(os.path.join(part_tiles_pyr_level_dir, subfolder)):
                pyr_level_subfolder_dir = os.path.join(part_tiles_pyr_level_dir, subfolder)
                for tile in os.listdir(pyr_level_subfolder_dir):
                    tile_filename = os.path.join(pyr_level_subfolder_dir, tile)
                    tokens = tile_filename.split(os.sep)
                    global_tile_filename = os.path.join(
                        global_tiles_dir, tokens[-3], tokens[-2], tokens[-1])
                    # if the tile already exists, merge the two; else copy it
                    if os.path.exists(global_tile_filename):
                        merged_tile = merge_tiles(tile_filename, global_tile_filename)
                        cv2.imwrite(global_tile_filename, merged_tile)
                    else:
                        tile_dir = os.path.dirname(global_tile_filename)
                        if not os.path.exists(tile_dir):
                            os.makedirs(tile_dir)
                        copyfile(tile_filename, global_tile_filename)
                    tiles_counter += 1

print('Total number of tiles', tiles_counter)
```
@mrgloom I think it depends on CPU performance. I tested on an 8-CPU (2-core) i7 processor and everything was fine. Logically the queue shouldn't be empty, because the processes are already started and each started process puts tcount into the queue almost immediately. Try raising the timeout to 3 or 5 seconds.
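Beyond raising the timeout, the read could retry instead of letting Queue.Empty propagate. A sketch of that pattern (worker is a hypothetical stand-in for a gdal2tilesp worker process):

```python
from multiprocessing import Process, Queue
from queue import Empty  # on Python 2: from Queue import Empty


def worker(q):
    q.put(42)  # stand-in for the tcount each worker reports


if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    total = None
    for _ in range(5):  # retry instead of crashing on a slow worker start
        try:
            total = q.get(timeout=5)  # the script used timeout=1
            break
        except Empty:
            continue
    p.join()
    print(total)
```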
About separate orthophotos: if they touch each other border-to-border without additional background, you can build a VRT using `gdalbuildvrt` and then use it as the input file for gdal2tilesp.
Did you mean this line? https://github.com/roblabs/gdal2tilesp/blob/master/gdal2tilesp.py#L2687
Also I tried setting proc_count to 1 (https://github.com/roblabs/gdal2tilesp/blob/master/gdal2tilesp.py#L2662), and it also fails:
```
Generating Base Tiles:
exception error: Traceback (most recent call last):
  File "/home/user/bin/python-gdal/gdal2tiles_v5.py", line 2644, in worker_base_tiles
    gdal2tiles.generate_base_tiles(cpu, queue)
  File "/home/user/bin/python-gdal/gdal2tiles_v5.py", line 1383, in generate_base_tiles
    band_list=list(range(1, self.dataBandsCount + 1)))
  File "/usr/lib/python2.7/dist-packages/osgeo/gdal.py", line 821, in ReadRaster
    band_list, buf_pixel_space, buf_line_space, buf_band_space )
RuntimeError: Illegal values for buffer size
```
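One common cause of this kind of ReadRaster error is a read window that falls partly outside the raster, yielding a zero or negative buffer size. A sketch of the clamping that avoids it (hypothetical helper and numbers, not the actual gdal2tiles logic):

```python
def clamp_read_window(rx, ry, rxsize, rysize, xsize, ysize):
    """Clip a requested window (rx, ry, rxsize, rysize) to a raster of
    xsize x ysize pixels so ReadRaster never sees a non-positive size."""
    rx = max(0, rx)
    ry = max(0, ry)
    rxsize = min(rxsize, xsize - rx)
    rysize = min(rysize, ysize - ry)
    if rxsize <= 0 or rysize <= 0:
        return None  # window entirely outside the raster: skip this tile
    return rx, ry, rxsize, rysize


# a 256x256 window hanging off the right/bottom edge of a 1000x800 raster
print(clamp_read_window(900, 700, 256, 256, 1000, 800))  # → (900, 700, 100, 100)
```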
If it matters, I have a 32-CPU server.
It seems the parallel version of gdal2tiles produces missing tiles at some zoom levels:
Same as for this patch: https://trac.osgeo.org/gdal/ticket/4379
Code I used for tile generation: