gimli-org / gimli

Geophysical Inversion and Modeling Library :earth_africa:
https://www.pygimli.org

Calculating Ray Paths and Response With Known Velocity Model on a Known Mesh (version 1.1.0) #289

Closed bflinchum closed 2 years ago

bflinchum commented 3 years ago

Problem description

Issues after a version change. My scripts worked on a previous version of the code but now fail with version 1.1.0 (the widely distributed version).

Your environment

Operating system: Mac
Python version: 3.7.9
pyGIMLi version: 1.1.0
Way of installation: manual compilation from source (I changed to v1.1.0 to match what can be installed via conda), downloaded here: https://github.com/gimli-org/gimli/tree/v1.1.0. I would have loved to install via conda but don't have it on this computer and fear it would mess a lot of stuff up.

Question/Bug

A while ago I updated from version 1.0 to 1.1 and had to modify my scripts. You guys helped me through this by moving me to the developer branch and providing some fixes for my issue: https://github.com/gimli-org/gimli/issues/235

At this stage I have had to "downgrade (?)" to version 1.1.0 because I am teaching my graduate students how to operate the code and the easiest install is through conda (thanks for that, by the way!). I have a bootstrapping inversion where I re-sample all of the picks and then invert the data over and over again. At the end I average all of the velocities together and want to trace the rays and calculate the inversion statistics for this averaged model. Previously (https://github.com/gimli-org/gimli/issues/235), you told me to just set up the inversion again with the average velocity as the starting model and set the maxIter flag to 0. There are now a few issues:

  1. the maxIter flag doesn't seem to be working. No matter what I set this value to, the inversion always runs 20 iterations.
  2. the startmodel flag doesn't appear to work either. I used to be able to:

ra.invert(data=ra.data,mesh=mesh,lam=smoothing,zWeight=vertWeight,startmodel=1./avgVel, maxIter=0)

This line used to give me what I wanted, but in version 1.1.0 it still runs for 20 iterations. I found a way around the maxIter flag by setting ra.inv.maxIter = 0 directly. Even with this, it clearly does at least one iteration, because when I start from a gradient model the result has some structure in it. My guess is that the inversion ran but the forward model has not been run through that iteration.

So my question is: how do I run the forward model (get the rays, ray coverage, and standardized ray coverage) and update the traveltime structure (in my case the variable ra) if I already know the velocity model and have the mesh?
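
Conceptually, I am after something like this (just a sketch using the same calls I already use in plotAndSaveResults below; this is exactly the part I don't know how to set up without running invert() first):

avgVel = np.mean(velModels, axis=1)        # cell-wise average of the bootstrap models
response = ra.fw.fop.response(1./avgVel)   # forward-modeled travel times for the averaged model
chi2 = ra.fw.chi2(response)                # misfit of that response against the picks
# ...plus the rays, coverage, and standardized coverage for avgVel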

What I want is the following:

  1. Image of the final averaged velocity model and vertical gradient:

ModelResults

  2. Image of the rays through the averaged model (and the coverage and standard coverage to write to a VTK file):

ModelRayCoverage

  3. The inversion statistics of this final model (roughly what I mean is sketched below):
    Chi2 = 0.7523
    RMS = 0.0015 s
    RRMS = XX %
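
By RRMS I just mean the relative RMS of the travel-time residuals. Roughly, what I would compute by hand is the following (numpy sketch, with obs, mod, and err being the picked times, modeled times, and assigned errors):

res = obs - mod
chi2 = np.mean((res/err)**2)               # error-weighted misfit
rms = np.sqrt(np.mean(res**2))             # absolute RMS in seconds
rrms = np.sqrt(np.mean((res/obs)**2))*100  # relative RMS in percent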

Here is my code:


import os
import random
import time
import pygimli as pg
from pygimli.physics import TravelTimeManager
import matplotlib.pyplot as plt
import numpy as np
from pygimli.physics.traveltime import load as loadGimliTT

if __name__ == "__main__":

    #************************** INPUTS ***************************************

    """
    Space separated file with shot location as the first column, geophone 
    location as the second column, and travel-time (in seconds) in the third
    column 
    """
    pickFile = 'TLV_2016_0816_picks.txt' #Pick file from picker.py

    """
    The topo file is a three-column file separated by spaces. The first column
    is the x-location that has been adjusted to map space. This number will be
    a few centimeters smaller than the field value because it is the land
    distance projected into map distance. The second column is the elevation
    in meters and the third column is the geophone location at its original
    spacing. I call these the old locations. This distance will be too long
    on lines with a lot of topography. The transformation is important for
    projecting the VTK files onto the topography. The old-locations column will
    match the locations in the pick file!!
    """
    topoFile = 'TLV_2016_0816_topoLiDARm.txt'

    #Line Location for spatially located VTK and GMT Files
    #MUST MATCH THE LOCATION USED TO EXTRACT THE TOPOGRAPHY!!!
    SoL = [464085.77,4563854.83]
    EoL = [464000.61,4563481.57]

    """    
    Set the errors for the inversion weighting. This uses a simple linear
    function to generate errors in seconds:
        error = offset*maxError/lineLength + minError
    It is advised that you don't go below 0.001 (1 ms) for the minError.
    I usually fit about 5-6 ms on the farthest-offset picks. Thus the default values
    are maxError = 0.005 and minError = 0.001.
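    For example, with these defaults and a 400 m long line, a pick at 200 m
    offset is assigned an error of 0.005*200/400 + 0.001 = 0.0035 s (3.5 ms).
    (Illustrative numbers only.)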
    """
    maxError = 0.005
    minError = 0.001

    """
    MESH PARAMETERS
    maxDepth = maximum depth of the model (sets paraDepth in the function call)
    meshDx = a decimal number defining how many nodes there are between geophones.
        0.33 gives 3. If you wanted 10 it would be 0.1; if you wanted 2 it would
        be 0.5 (sets paraDX in the function call)
    maxTriArea = maximum area of each triangle. The smaller you make this
        number the more cells you will have. This makes your forward model
        more accurate, but at the cost of computation time. (sets
        paraMaxCellSize in the function call)
    """
    maxDepth = 60
    meshDx = 0.3
    maxTriArea = 7

    """
    INVERSION PARAMETERS
    smoothing = sets lam in the model. Make this number larger for more smoothing.
        This is the scalar value on the weighting matrix.
    vertWeight = sets the zWeight in the function. This value controls how
        much vertical versus horizontal smoothing there is. This value needs
        to stay between 0 and 1. A value of 1 gives a very laterally smooth model;
        a value close to zero lets in a lot of lateral heterogeneity.
    minVel = the minimum velocity at the top of the model
    maxVel = the maximum velocity at the bottom of the model
    """
    smoothing = 100
    vertWeight = 0.1
    minVel = 1000
    maxVel = 4500
    maxIterations = 10

    """
    Bootstrapping parameters. This algorithm will run the inversion over and
    over again with the same inversion parameters, except that it removes
    a percentage (or fraction) of the data prior to each run. This works well
    if you have a couple thousand picks.
    fract2Remove = the fraction of picks to remove each bootstrap iteration
    nRuns = total number of bootstrap iterations
    path2Runs = path to the runs folder; it must have the / at the end since it's a directory
    """
    fract2Remove = 0.4
    nRuns = 10
    path2Runs = "/Users/bflinch/Dropbox/Clemson/Research/ResearchProjects/PythonCodes/SeismicInversionBoiler/StreamlinedScripts/runs/"
    #*************************************************************************

    #********** DO INVERSION***************************************************
    for i in range(0,nRuns):
        folderName = "run"+str(i+1)
        #Convert the file
        inFile = resamplePicks(pickFile,fract2Remove,topoFile, minError, maxError)
        #Read in the file with gimli method
        data = loadGimliTT(inFile)

        #Create the structure required for inversion using TravelTimeManager and data container
        ra = TravelTimeManager(data)
        #Generate the mesh
        mesh = ra.createMesh(data=ra.data,paraDepth=maxDepth,paraDX=meshDx,paraMaxCellSize=maxTriArea)

        #Do the Inversion (time the inversion)
        start = time.time()
        ra.inv.maxIter = maxIterations
        ra.invert(data=ra.data,mesh=mesh,lam=smoothing,zWeight=vertWeight,useGradient=True,vTop=minVel,vBottom=maxVel)
        end = time.time()

        if i == 0:
            velModels = np.zeros((len(mesh.cellCenters()),nRuns))
            velModels[:,i] = ra.paraModel()
        else:
            velModels[:,i] = ra.paraModel()
            np.savetxt(path2Runs+'allVelocityModels.txt',velModels) #fail safe write the data each iteration in case of crash

        print('\n\nCompleted Run ' + str(i+1) + ' of ' + str(nRuns) + ' Inversion...\n\n')
        plotAndSaveResults(path2Runs,folderName,mesh,ra,start,end)
        data.save(path2Runs+folderName+'/'+pickFile.split('.')[0]+'_run'+str(i+1)+'.txt')

    #Trace the rays through the average model:

    #****** HAVING ISSUES HERE ***************
    print('\n\n******Starting Final Averaged Run*****\n\n')
    inFile = resamplePicks(pickFile,0,topoFile,minError,maxError)

    #Read in the file with ALL the picks
    data = loadGimliTT(inFile)
    #Create the structure required for inversion using TravelTimeManager and data container

    ra = TravelTimeManager(data)
    #Generate the mesh
    mesh = ra.createMesh(data=ra.data,paraDepth=maxDepth,paraDX=meshDx,paraMaxCellSize=maxTriArea)
    start = time.time()

    #THIS SEEMS REDUNDANT. I shouldn't have to re-invert the data if I just want
    #to calculate the response through a velocity model.
    #I did figure out how to get the chi^2 value and response, but I don't know
    #how to get the data to update in the ra structure. I ran the inversion
    #because, per the last help, I need the structure ra to be set up by the inversion.
    ra.inv.maxIter = 0
    ra.invert(data=ra.data,mesh=mesh,lam=smoothing,zWeight=vertWeight,startmodel=1./avgVel)
    end = time.time()   

    print('\n\n******Completed Final Averaged Run*****\n\n')
    folderName = 'FinalAveragedRun'
    finalModel = True
    plotAndSaveResults(path2Runs,folderName,mesh,ra,start,end,finalModel)
    np.savetxt(path2Runs+folderName+'/'+'allVelocityModels.txt',velModels)

def plotAndSaveResults(path,folderName,mesh,ra,start,end,finalModel=False):

    #Plotting parameters. Here instead of in the function call for now.
    minVelocity = 300
    maxVelocity = 4500
    minVertGrad = 0
    maxVertGrad = 300
    minHist = 0
    maxHist = 25
    minRayCoverage = 0
    maxRayCoverage = 10

    #Create the folder
    os.makedirs(path+folderName)

    #Calculate Vertical Gradient
    if finalModel:
        vels = np.loadtxt(path + 'allVelocityModels.txt')
        vel = np.mean(vels,axis=1)
        vertGrad = pg.solver.grad(mesh,vel)
        std = np.std(vels,axis=1)

        response = ra.fw.fop.response(1./vel)
        chi2 = ra.fw.chi2(response)
        modelRMS = pg.utils.rms(ra.fw.dataVals-response)*1000

        #Print out important model Inversion results
        s2 = "Chi2 = " + str("{:0.4f}".format(chi2)) + "\n"
        s3 = "RMS = " + str("{:0.4f}".format(modelRMS)) + " ms\n"
        print(s2)
        print(s3)
        outFile = open(path+folderName+'/'+'InversionStats.txt','w')
        outFile.write(s2+s3)
        outFile.close()  
    else:
        vel = ra.paraModel()
        vertGrad = pg.solver.grad(mesh,vel)
        std = np.zeros(len(vel))

        #Print out important model Inversion results
        s1 = "Time Elapse (min): " + str("{:0.2f}".format((end - start)/60)) + " min\n"
        s2 = "Chi2 = " + str("{:0.4f}".format(ra.inv.chi2())) + "\n"
        s3 = "RMS = " + str("{:0.4f}".format(ra.inv._inv.absrms())) + " s\n"
        s4 = "Total Iterations = " + str("{:0.0f}".format(ra.inv.maxIter)) + "\n"
        print(s1)
        print(s2)
        print(s3)
        print(s4)
        outFile = open(path+folderName+'/'+'InversionStats.txt','w')
        outFile.write(s1+s2+s3+s4)
        outFile.close()    

    #Save Results as a .vtk file (Paraview)
    mesh.addData('Velocity',vel)
    mesh.addData('Vertical Velocity Gradient',-vertGrad[:,1])
    mesh.addData('Ray Coverage',ra.coverage())
    mesh.addData('Standard Ray Coverage',ra.standardizedCoverage())
    mesh.exportVTK(path+folderName+'/'+pickFile.split('.')[0] + '_gimliResults.vtk' )

    #Spatially Locate the Profile
    spatiallyLocateVTKFile(path+folderName+'/'+pickFile.split('.')[0] + '_gimliResults.vtk',SoL,EoL)

    #******************** PLOT AND SAVE RESULTS ******************************
    fig1 = plt.figure('Model Results')
    fig1.set_size_inches([8,8])
    ax = fig1.add_subplot(211)
    pg.show(mesh, vel, coverage=ra.standardizedCoverage(),cMin=minVelocity,cMax=maxVelocity,cMap='jet_r',ax=ax)
    pg.viewer.mpl.drawSensors(ax, data.sensorPositions(), diam=0.5, color="k")
    ax.set_xlabel('Distance (m)')
    ax.set_ylabel('Elevation (m)')
    ax.set_title('Velocity (m/s)')
    ax2 = fig1.add_subplot(212)
    pg.show(mesh, data=-vertGrad[:,1], coverage=ra.standardizedCoverage(),cMin=minVertGrad,cMax=maxVertGrad,ax=ax2,cMap='plasma')
    pg.viewer.mpl.drawSensors(ax2, data.sensorPositions(), diam=0.5, color="k")
    ax2.set_xlabel('Distance (m)')
    ax2.set_ylabel('Elevation (m)')
    ax2.set_title('Vertical Gradient (m/s/m)')
    fig1.savefig(path+folderName+'/'+'ModelResults.png',dpi=600)
    #ra.showResult(cMin=500,cMax=4500,cmap='jet_r')

    fig2 = plt.figure('Ray Coverage')
    fig2.set_size_inches([8,8])
    ax = fig2.add_subplot(211)
    pg.show(ra.paraDomain, vel, coverage=ra.standardizedCoverage(),cMin=minVelocity,cMax=maxVelocity,cMap='jet_r',ax=ax)
    pg.viewer.mpl.drawSensors(ax, data.sensorPositions(), diam=0.5, color="k")
    ra.drawRayPaths(ax,color="k")
    ax.set_xlabel('Distance (m)')
    ax.set_ylabel('Elevation (m)')
    ax.set_title('Ray Paths')
    ax2 = fig2.add_subplot(212)
    tmpCov = ra.coverage()
    tmpCov[np.isneginf(tmpCov)] = 0
    pg.show(mesh, data=tmpCov, coverage=ra.standardizedCoverage(),cMin=minRayCoverage,cMax=maxRayCoverage,ax=ax2,cMap='plasma')
    pg.viewer.mpl.drawSensors(ax2, data.sensorPositions(), diam=0.5, color="k")
    ax2.set_xlabel('Distance (m)')
    ax2.set_ylabel('Elevation (m)')
    ax2.set_title('Ray Coverage (# of rays in each cell)')
    fig2.savefig(path+folderName+'/'+'ModelRayCoverage.png',dpi=600)

    fig3 = plt.figure('Model Fits')
    fig3.set_size_inches([11,5])
    ax1 = fig3.add_subplot(121)
    pickTT = np.array(ra.data('t'))
    modTT = np.array(ra.inv.response)
    ax1.hist2d(pickTT*1000,(pickTT-modTT)*1000,bins=75,vmin=minHist,vmax=maxHist,cmap='plasma')
    ax1.plot([0,200],[0,0],'k--',linewidth=2)
    ax1.set_xlabel('Observed Travel-times (ms)')
    ax1.set_ylabel('Residual (Obs - Mod) (ms)')

    ax2 = fig3.add_subplot(122)
    ax2.hist2d(pickTT*1000,modTT*1000,bins=100,vmin=minHist,vmax=maxHist,cmap='plasma')
    ax2.plot([0,200],[0,200],'k',linewidth=2)
    ax2.plot([0,200],[-5,195],'k--',linewidth=2)
    ax2.plot([0,200],[5,205],'k--',linewidth=2)
    ax2.set_xlabel('Observed Travel-times (ms)')
    ax2.set_ylabel('Modeled Travel-times (ms)')
    fig3.savefig(path+folderName+'/'+'ModelFits.png',dpi=600)

def resamplePicks(pickFile,fract2Rmv,topoFile,minError,maxError):
    f = open(pickFile,'r')
    #Split the lines based on the newline character (new row in the list for every \n)
    lines = f.read().split('\n')
    #Close the file for good file keeping
    f.close()

    #Get total number of picks
    totalPicks = len(lines)
    #Calculate the total number to keep (1 - fract2Remove)
    picks2Keep = int(np.round((1-fract2Rmv)*totalPicks,0))

    topo = np.loadtxt(topoFile)    
    #geoLocs is the topographically adjusted x-location
    geoLocs = topo[:,0]
    #oldLocs is the x-location of the geophones. This needs to match the pick file
    oldLocs = topo[:,2]   

    #Get a list of indices to keep using the random package
    lines = random.sample(lines,picks2Keep)
    gimliPicks = np.zeros((1,3))   
    #Begin to loop over all lines in the file
    for i in range(0,len(lines)):
        #extract the first line and store it as a string (check with print(tmpString))
        tmpString = lines[i]        
        #If the line is not empty then proceed
        if tmpString != '':
            #Split the string into a list based on spaces. Modify here if it's comma separated
            data = tmpString.split()
            #xPick = float(data[1])/gx + 1
            #This finds the index of the value closest to the geophone location
            #The index is the line number in the topography file
            xPick = np.argmin(np.sqrt((float(data[1])-oldLocs)**2))+1
            #print(xPick)
            #Extract the picked travel time
            tPick = float(data[2])
            #sPick = float(data[0])/gx + 1
            #Extract the index of the shot location
            ##The index is the line number in the topography file
            sPick = np.argmin(np.sqrt((float(data[0])-oldLocs)**2))+1       
        #print([xPick,sPick])
        #Do some error checking. Pygimli does not like repeat picks or the pick at the shot location
        #If the shot location does not equal (!=) the geophone location and the pick is greater than zero
        #Then add the pick to the 3 column matrix that is going to be written out.
        if tPick > 0 and xPick != sPick:
            tmpMat = [sPick,xPick,tPick]
            gimliPicks = np.row_stack((gimliPicks,tmpMat))
            #print([sPick,xPick,tPick])       
    #Remove the first row of the matrix because I initialized it with zeros.
    gimliPicks = np.delete(gimliPicks,0,axis=0)

    #Calculate Error
    minShot = geoLocs[int(np.min(gimliPicks[:,0])-1)]
    maxShot = geoLocs[int(np.max(gimliPicks[:,0])-1)]
    lineLength = maxShot - minShot
    print('Line Length = ' + str(lineLength) + ' m')

    """
    ****************** GIMLI INPUT FILE DESCRIPTION *************************
    numberOfRows(integer) #Comment line
    x1 e1
    x2 e2
    .
    .
    .
    xn en
    numberOfPicks (integer) #Comment Line
    # s g t e[optional] <-- REQUIRED LINE NOT A COMMENT!!
    s1 g1 t1
    s2 g2 t2
    .
    .
    .
    sn gn tn

    **Note: sn, gn, tn are index values referring to the row numbers in the topography above.

    The gimli input file has two pieces. The first piece is the topography:
    the first comment line tells you that these are the geophone locations and elevations;
    these values will be referenced by index. Thus index = 1 is the first
    geophone/elevation pair listed.

    After the topography, a line telling gimli how many picks there
    are is required. A comment line with the flags s, g, t, e [optional] is required:
    s = source index, g = geophone index, t = travel-time pick (seconds),
    e = error in seconds (picking error for the chi^2 calculation).

    The biggest difference between the pick file and the gimli input is that the
    gimli input references everything as indices, not values. The indices refer
    to the topography at the top of the file.
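
    A tiny concrete example with made-up numbers (3 geophones, 2 picks) would be:
        3 # geophone locations
        0.0 100.0
        5.0 100.2
        10.0 100.1
        2 #measurements
        #s g t err
        1 2 0.00510 0.00120
        1 3 0.00950 0.00130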
    """
    #Write out the input file
    #Rename the input file: remove the .txt by splitting the file name on a .
    #then add _gimliInput.txt to the file name
    #Possibly add a path here? Bugs will occur if a file name has a . in it...
    gimliInFile = pickFile.split('.')[0] + '_gimliInput.txt' 
    #Open/create this file for writing
    f = open(gimliInFile,'w')    

    f.write(str(topo.shape[0])+ ' # geophone locations\n')
    for i in range(0,topo.shape[0]):
        f.write(str(topo[i,0])+' ' +str(topo[i,1]) + '\n')

    f.write(str(gimliPicks.shape[0]) + ' #measurements\n')
    f.write('#s g t err\n')     
    for i in range(0,gimliPicks.shape[0]):
        offset = np.abs(geoLocs[int(gimliPicks[i,0]-1)]-geoLocs[int(gimliPicks[i,1]-1)])
        err2write = maxError*offset/lineLength + minError
        str2Write = '{:4d} {:4d} {:0.5f} {:0.5f} \n'.format(int(gimliPicks[i,0]), int(gimliPicks[i,1]),gimliPicks[i,2],err2write)
        f.write(str2Write)
    f.close()

    return gimliInFile

def calcUnitVector(SoL,EoL):
    de = EoL[0] - SoL[0]
    dn = EoL[1] - SoL[1]
    mag = np.sqrt(de**2 + dn**2)
    ux = de/mag
    uy = dn/mag

    return ux,uy

def spatiallyLocateVTKFile(vtkFile,SoL,EoL):

    file1 = open(vtkFile, 'r') 
    outFileName = vtkFile.split('.')[0] + '_UTM.vtk' 
    file2 = open(outFileName,'w')
    count = 0

    ux,uy = calcUnitVector(SoL,EoL)

    while True: 
        count += 1

        # Get next line from file 
        line = file1.readline() 
        if count == 5:
            nPoints = line.split(' ')
            nPoints = int(nPoints[1]) + count
            print(nPoints)

        if count > 5 and count <= nPoints:
            #print(line)
            tempLine = line.split('\t')
            xDist =float(tempLine[0])
            elev = float(tempLine[1])
            easting = xDist*ux + SoL[0]
            northing = xDist*uy + SoL[1]
            line2Write = '{0:10.2f}\t{1:10.2f}\t{2:10.2f}\n'.format(easting,northing,elev)
            file2.write(line2Write)
            #print(line2Write)
        else:
            file2.write(line)

        # if line is empty 
        # end of file is reached 
        if not line: 
            break

    file1.close() 
    file2.close()

Attachments: TLV_2016_0816_picks_gimliInput.txt, TLV_2016_0816_picks.txt, TLV_2016_0816_topoLiDARm.txt

halbmy commented 3 years ago

Way of installation: manual compilation from source ( I changed to V1.1.0 to match what can be installed via conda) Downloaded here: https://github.com/gimli-org/gimli/tree/v1.1.0. I would have loved to install via conda but don't have it on this computer and fear it would mess a lot of stuff up.

First, congratulations on successfully compiling the code from source! Did you experience any difficulties, or are there any points where we should improve the installation instructions?

As to conda: if you install Miniconda locally within your home directory, you should not mess anything up, particularly when working with environments. Maybe you can try installing with conda as an alternative.
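
If you end up with both installs side by side, a quick sanity check for which version a given Python environment actually picks up is:

import pygimli as pg
print(pg.__version__)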

bflinchum commented 3 years ago

Hey @halbmy, yes, there are some compilation errors. If you download the master branch it won't compile on Mac.

Just some background: I use MacPorts (not brew) to manage and install all of my packages. Initially, when compiling (before the conda release), the trick was getting boost installed correctly. It took me a while to figure out, but now that I have it, it's not too bad to compile. To solve the boost issue I ended up adding a soft link in the build directory. So here are the steps I follow to compile the code:

#Create a soft link to the boost libraries in the build directory
ln -s /opt/local/include/boost boost

#Paths are to libraries located on my computer (/opt/local is the MacPorts prefix)
cmake ../ -DPYTHON_EXECUTABLE="/opt/local/bin/python3" -DPYTHON_INCLUDE_DIR="/opt/local/Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m" -DPYTHON_LIBRARY="/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7.dylib" -DPY_Numpy="/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/numpy"
make -j 8
make pygimli

I know that you must have a solution to the problem, because if I download the version 1.1.0 that is under releases, everything compiles fine. However, if I download the current master branch and follow the same steps, I get an error on "make pygimli". I know that this error has been flagged, because there is a comment written into the .cpp file!:

// FW: Caused problems during Mac build.

I am not exactly sure what this "type is ambiguous" error means. If this information is helpful and you want more, please let me know. I can copy all of the output from my terminal in here so you can see the warnings and such too.

[ 83%] Building CXX object core/python/CMakeFiles/_pygimli_.dir/generated/custom_rvalue.cpp.o
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:50:17: error: call to 'type' is ambiguous
             << GIMLI::type(ValueType(0)) << ")" << typeid(ValueType).name())
                ^~~~~~~~~~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:34:52: note: expanded from macro '__DC'
#define __DC(str) if (GIMLI::deepDebug() > 0) __MS(str)
                                                   ^~~
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:127:41: note: expanded from macro '__MS'
#define __MS(str) std::cout << "*** " <<str << " " << WHERE << std::endl;
                                        ^~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:340:16: note: in instantiation of function template specialization
      'r_values_impl::checkConvertibleSequenz<unsigned long>' requested here
        return checkConvertibleSequenz<GIMLI::Index>(obj);
               ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:409:20: note: candidate function
inline std::string type(const bool          & var) { return "bool"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:410:20: note: candidate function
inline std::string type(const int32         & var) { return "int32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:411:20: note: candidate function
inline std::string type(const int64         & var) { return "int64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:412:20: note: candidate function
inline std::string type(const uint32        & var) { return "uint32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:413:20: note: candidate function
inline std::string type(const uint64        & var) { return "uint64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:416:20: note: candidate function
inline std::string type(const float         & var) { return "float"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:417:20: note: candidate function
inline std::string type(const double        & var) { return "double"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:50:17: error: call to 'type' is ambiguous
             << GIMLI::type(ValueType(0)) << ")" << typeid(ValueType).name())
                ^~~~~~~~~~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:34:52: note: expanded from macro '__DC'
#define __DC(str) if (GIMLI::deepDebug() > 0) __MS(str)
                                                   ^~~
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:127:41: note: expanded from macro '__MS'
#define __MS(str) std::cout << "*** " <<str << " " << WHERE << std::endl;
                                        ^~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:368:16: note: in instantiation of function template specialization
      'r_values_impl::checkConvertibleSequenz<long>' requested here
        return checkConvertibleSequenz<GIMLI::SIndex>(obj);
               ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:409:20: note: candidate function
inline std::string type(const bool          & var) { return "bool"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:410:20: note: candidate function
inline std::string type(const int32         & var) { return "int32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:411:20: note: candidate function
inline std::string type(const int64         & var) { return "int64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:412:20: note: candidate function
inline std::string type(const uint32        & var) { return "uint32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:413:20: note: candidate function
inline std::string type(const uint64        & var) { return "uint64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:416:20: note: candidate function
inline std::string type(const float         & var) { return "float"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:417:20: note: candidate function
inline std::string type(const double        & var) { return "double"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:558:13: error: call to 'type' is ambiguous
            GIMLI::type(ValueType(0))) // FW: Caused problems during Mac build
            ^~~~~~~~~~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:34:52: note: expanded from macro '__DC'
#define __DC(str) if (GIMLI::deepDebug() > 0) __MS(str)
                                                   ^~~
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:127:41: note: expanded from macro '__MS'
#define __MS(str) std::cout << "*** " <<str << " " << WHERE << std::endl;
                                        ^~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:644:16: note: in instantiation of function template specialization
      'r_values_impl::checkConvertibleNumpyScalar<long>' requested here
        return checkConvertibleNumpyScalar< GIMLI::SIndex >(obj);
               ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:409:20: note: candidate function
inline std::string type(const bool          & var) { return "bool"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:410:20: note: candidate function
inline std::string type(const int32         & var) { return "int32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:411:20: note: candidate function
inline std::string type(const int64         & var) { return "int64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:412:20: note: candidate function
inline std::string type(const uint32        & var) { return "uint32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:413:20: note: candidate function
inline std::string type(const uint64        & var) { return "uint64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:416:20: note: candidate function
inline std::string type(const float         & var) { return "float"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:417:20: note: candidate function
inline std::string type(const double        & var) { return "double"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:558:13: error: call to 'type' is ambiguous
            GIMLI::type(ValueType(0))) // FW: Caused problems during Mac build
            ^~~~~~~~~~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:34:52: note: expanded from macro '__DC'
#define __DC(str) if (GIMLI::deepDebug() > 0) __MS(str)
                                                   ^~~
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:127:41: note: expanded from macro '__MS'
#define __MS(str) std::cout << "*** " <<str << " " << WHERE << std::endl;
                                        ^~~
/Users/bflinch/PyGIMLi/V1p1a/build/core/python/generated/custom_rvalue.cpp:656:16: note: in instantiation of function template specialization
      'r_values_impl::checkConvertibleNumpyScalar<unsigned long>' requested here
        return checkConvertibleNumpyScalar< GIMLI::Index >(obj);
               ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:409:20: note: candidate function
inline std::string type(const bool          & var) { return "bool"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:410:20: note: candidate function
inline std::string type(const int32         & var) { return "int32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:411:20: note: candidate function
inline std::string type(const int64         & var) { return "int64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:412:20: note: candidate function
inline std::string type(const uint32        & var) { return "uint32"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:413:20: note: candidate function
inline std::string type(const uint64        & var) { return "uint64"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:416:20: note: candidate function
inline std::string type(const float         & var) { return "float"; }
                   ^
/Users/bflinch/PyGIMLi/V1p1a/core/src/gimli.h:417:20: note: candidate function
inline std::string type(const double        & var) { return "double"; }
                   ^
4 errors generated.
gmake[7]: *** [core/python/CMakeFiles/_pygimli_.dir/build.make:1772: core/python/CMakeFiles/_pygimli_.dir/generated/custom_rvalue.cpp.o] Error 1
gmake[6]: *** [CMakeFiles/Makefile2:654: core/python/CMakeFiles/_pygimli_.dir/all] Error 2
gmake[5]: *** [CMakeFiles/Makefile2:742: core/python/CMakeFiles/pg.dir/rule] Error 2
gmake[4]: *** [Makefile:430: pg] Error 2
make[3]: *** [pgbuild] Error 2
make[2]: *** [core/python/CMakeFiles/pgbuild.dir/all] Error 2
make[1]: *** [CMakeFiles/pygimli.dir/rule] Error 2
make: *** [pygimli] Error 2

carsten-forty2 commented 3 years ago

e8deec6 will hopefully fix the compilation issue.

I tried your script (needed to move the __main__ scope to the end of the file) but can't reproduce the maxIter issue.

ra.inv.maxIter = 1

or

ra.invert(data=ra.data,mesh=mesh,lam=smoothing,zWeight=vertWeight,startModel=1./avgVel, 
              verbose=True, maxIter=1)

both work for me as expected (on the dev branch). I set verbose=True to see that the iteration count stops at 1. Note that at least one iteration is always forced in the inversion.

Try the keyword argument 'startModel' instead of 'startmodel' (we came from C++ and decided on camelCase as a design pattern, which is sadly not always unambiguous). The inversion log tells you which starting model is used.

To only show the ray paths, it should be sufficient to call ra.showRayPaths with an appropriate model argument. Assuming ra knows a mesh and data after one prior run, it will compute the response for any model that differs from the last known active model. The same holds for ra.drawRayPaths, which also has a model argument. https://www.pygimli.org/pygimliapi/_generated/pygimli.physics.traveltime.html?highlight=showraypath#pygimli.physics.traveltime.TravelTimeManager.showRayPaths
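
For your averaged model this could look roughly like the following (untested sketch; it assumes ra already knows data and mesh from one prior run and that velModels holds your bootstrap results):

avgVel = np.mean(velModels, axis=1)            # cell-wise average of the bootstrap models
ax, cb = pg.show(mesh, avgVel, cMap='jet_r')   # plot the averaged model
ra.drawRayPaths(ax, model=avgVel, color='k')   # triggers a forward run for avgVel and draws the rays
# or, without an existing axis:
# ra.showRayPaths(model=avgVel)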

Exporting these ray paths to VTK might be possible too ... I can think about a snippet if you want one.

Hope this helps with some of your questions ... please ask if I missed some points (it's a little late now to have the complete overview).

Cheers, Carsten

sschennen commented 3 years ago

Here is a suggestion for the ray paths from your first sensor, based on some code from the 3D refraction example and the MultiBlock object of pyvista:

#(...)
#Trace the rays through the average model:
ra = TravelTimeManager(data)
mesh = ra.createMesh(data=ra.data,paraDepth=maxDepth,paraDX=meshDx,paraMaxCellSize=maxTriArea)
matModVel = np.genfromtxt(path2Runs+'allVelocityModels.txt')
fop = pg.core.TravelTimeDijkstraModelling(mesh, ra.data)      #traveltime forward operator
dij = pg.core.Dijkstra(fop.createGraph(1 / matModVel[:,-1]))  #shortest-path graph from slowness (here: the last bootstrap model)
arrSensors = np.array(data.sensorPositions())                 #sensor coordinates

import pyvista as pv

mbRays = pv.MultiBlock()
dij.setStartNode(mesh.findNearestNode(arrSensors[0,:]))

for arrSens in arrSensors:
    ni = dij.shortestPathTo(mesh.findNearestNode(arrSens))
    pos = mesh.positions(withSecNodes=True)[ni]
    segs = np.zeros((len(pos), 3))
    segs[:, 0] = pg.x(pos)
    segs[:, 1] = pg.y(pos)
    segs[:, 2] = pg.z(pos)
    mbRays.append(pv.lines_from_points(segs,close=False))
mbRays = mbRays.combine()
pv.save_meshio(os.path.join(path2Runs,'rayShots_0.vtk'),mbRays)

Cheers, Stephan

halbmy commented 3 years ago

There hasn't been any activity on this issue recently, and I was wondering whether the compilation or compatibility problem was solved by version 1.2. If not, we should open a new issue.

As to the problem of ray paths, we should have a new look at it using version 1.2. I started doing that but got confused by errors when saving and loading files. I am attaching my script. If the problem persists, start from there; if not, we can close the issue. pg289.py.txt

halbmy commented 2 years ago

There hasn't been any activity in a while, but I guess we have all learned a lot, so I am closing this. Thanks @bflinchum for the input.