NLPAR Examples#

[1]:
from pyebsdindex import nlpar
[2]:
file0 = '~/Desktop/SLMtest/scan2v3.up1'
[3]:
nlobj = nlpar.NLPAR(file0,lam=0.9, searchradius=3)

As always, the search radius is half the search window size, so the full window will be (2*searchradius+1) patterns on a side, or in this case 7x7 patterns (including the center pattern of interest).
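The relationship between the search radius and the full window can be checked with a quick sketch (plain arithmetic, no library calls assumed):

```python
# Sketch: how searchradius maps to the full search window described above.
searchradius = 3
window_edge = 2 * searchradius + 1   # patterns along one side of the window
window_size = window_edge ** 2       # total patterns compared, center included
print(window_edge, window_size)      # 7 49
```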

Estimating a value for lambda#

A value of 0.9 is a pretty good guess for lambda for relatively noisy 80x80 patterns, but one can get a customized value of lambda by running an optimization that examines what value of lambda would provide a certain reduction in the weight of the pattern of interest within a nearest-neighbor search window. The idea is that for most scans, nearly all the neighboring patterns are identical other than noise; thus the weight of the pattern of interest is a measure of how much the neighboring patterns are contributing. Three target weights are considered (by default: [0.5, 0.34, 0.25]). Heuristically, we have found that 0.34 provides a reasonable estimate; the other two values represent a reasonable range for lambda (a lower lambda means less neighbor averaging, a higher lambda means more).

Note: this will also calculate a per-point estimate of the noise in each pattern (sigma), which is automatically stored in the nlobj for use in the NLPAR calculations.
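The weighting idea above can be illustrated with a toy sketch. This is a simplified exponential weighting, not PyEBSDIndex's exact implementation (the real distance normalization involves the per-pattern sigma estimates); it only shows how lambda trades off the weight of the pattern of interest against its neighbors:

```python
import numpy as np

# Toy sketch of NLPAR-style weighting: each neighbor pattern gets a weight
# exp(-d^2 / lambda^2), where d^2 is a (here, made-up) normalized pattern
# distance. The center pattern has distance 0 to itself.
def weights(dist2, lam):
    w = np.exp(-dist2 / lam**2)
    return w / w.sum()

# Center pattern plus four equally "noisy but identical" neighbors.
dist2 = np.array([0.0, 1.0, 1.0, 1.0, 1.0])
for lam in (0.65, 0.90, 1.16):
    # Lower lambda -> larger center weight (less neighbor averaging);
    # higher lambda -> smaller center weight (more neighbor averaging).
    print(lam, weights(dist2, lam)[0])
```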

chunksize = int (default: 0): the number of rows from the EBSD scan to process at one time. The default (0) examines the number of columns in the scan and estimates a chunk size that is approximately 4 GB. No promises.

automask = True (default: True): places a circular mask around the pattern (with a diameter equal to the shorter of the two pattern edges).

autoupdate = True (default: True): updates the lambda value in the nlobj class with the optimized value.

backsub = True (default: False): performs a basic background subtraction on the patterns before evaluation. The average pattern is calculated per pattern chunk.

saturation_protect = True (default: True): excludes pixels that have the maximum brightness value from the calculation (the maximum value is again calculated per pattern chunk).
[4]:
nlobj.opt_lambda(chunksize = 0, automask = True, autoupdate=True, backsub = False)
Chunk size set to nrows: 278
Block 0
Block 278
Block 556
Block 834
Range of lambda values:  [0.65239258 0.90292969 1.15952148]
Optimal Choice:  0.9029296874999998

Executing NLPAR#

Now that there are reasonable estimates for sigma and lambda, one can execute NLPAR. With the default values, and when using EDAX 'UP' files, a new file will be created to store the result, with the naming pattern [filename]lam[x.xx]sr[x]dt[x.x].up[1/2].

The user can override the output filename if desired, but should not overwrite the original data file (no protections are provided).
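The default naming convention can be sketched as a small helper. Note this is an illustration of the template above, not a function from pyebsdindex; the exact digit formatting used by the library is an assumption here:

```python
import os

# Hypothetical helper mirroring the documented template
# [filename]lam[x.xx]sr[x]dt[x.x].up[1/2]; digit formatting is assumed.
def default_output_name(filepath, lam, searchradius, dthresh):
    base, ext = os.path.splitext(filepath)
    return f"{base}lam{lam:.2f}sr{searchradius}dt{dthresh:.1f}{ext}"

print(default_output_name('~/Desktop/SLMtest/scan2v3.up1', 0.90, 4, 0.0))
# ~/Desktop/SLMtest/scan2v3lam0.90sr4dt0.0.up1
```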

[6]:
nlobj.searchradius = 4
nlobj.calcnlpar(chunksize=0,searchradius=None,lam = None, saturation_protect=True,automask=True,
                filename=None, fileout=None, backsub = False)
Chunk size set to nrows: 278
0.90292966 4 0.0
Block 0
Block 278
Block 556
Block 834
[5]:
nlobj.calcnlpar(chunksize=0,searchradius=None,lam = None, saturation_protect=True,automask=True,
                filename=None, fileout='/tmp/dave.up1', backsub = False)
Chunk size set to nrows: 278
0.90292966 3 0.0
Block 0
Block 278
Block 556
Block 834
[6]:
nlobj.calcnlpar(chunksize=0,searchradius=11,lam = 1.2, saturation_protect=True,automask=True,
                filename=None, fileout='/tmp/dave2.up1', backsub = False)
Chunk size set to nrows: 278
1.2 11 0.0
Block 0
Block 278
Block 556
Block 834