Help with batch processing

Dear all,

I'm trying to create a loop within the carbon model to test the effect of LULC changes and of different carbon prices/discount rates on the value of the ES. Since I'm a beginner with Python, I followed the tutorial for batch processing (https://invest.readthedocs.io/en/latest/scripting.html#creatingsamplepythonscripts) and tried to implement the examples ("Threshold Flow Accumulation Parameter Study" and "Invoke NDR Model on a Directory of Land Cover Maps") using the training data available for the NDR model (please find the .py scripts attached).

However, after running the script for the first example (where I replaced the execution call with the loop provided), I got the results of only one simulation rather than the results of 10 simulations, as stated in the example.
For the second one, I created a folder containing the LULC maps to test and replaced the call line (modifying the directory path). In this case, I received an error:

C:\Users\ASIL>C:\Python27\python.exe C:\Users\ASIL\Desktop\testeNDR\run2\testNDR_2.py
  File "C:\Users\ASIL\Desktop\testeNDR\run2\testNDR_2.py", line 28
    landcover_dir = r'C:\Users\ASIL\Work\InVEST training data\LULC'
    ^
IndentationError: unexpected indent

I would like to know what is wrong with the scripts. Thanks in advance.

Cheers,
Ângelo

Comments

  • Hi again,

    good news, both issues are solved :)

    In the first one, I found a typo in the loop: it should be args['results_suffix'] = 'accum' + str(threshold_flow_accumulation) instead of args['suffix'] = 'accum' + str(threshold_flow_accumulation). After that fix, when I ran the model, all the simulations were saved.
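The corrected loop might look like the sketch below. The real script calls the InVEST NDR model; here a stub stands in for that call so the looping and suffix logic can be shown on its own, and the threshold values are illustrative, not the tutorial's exact ones.

```python
def execute(args):
    # Placeholder for natcap.invest.ndr.ndr.execute(args); just echoes
    # the suffix so we can see that each run is distinct.
    return args['results_suffix']

args = {
    'workspace_dir': 'C:/ndr_workspace',  # hypothetical workspace path
}

suffixes = []
for threshold_flow_accumulation in range(100, 1100, 100):  # 10 runs
    args['threshold_flow_accumulation'] = threshold_flow_accumulation
    # The key must be 'results_suffix', not 'suffix'; with the wrong key
    # every run writes to the same filenames and overwrites the last.
    args['results_suffix'] = 'accum' + str(threshold_flow_accumulation)
    suffixes.append(execute(args))
```

Because each run gets a distinct suffix, all 10 sets of outputs survive in the workspace.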

    In the second script, I corrected the indentation of the line, as well as the args typo above.

    Endurance testing: Done ;)

    Cheers,
    Ângelo
  • jdouglass Administrator, NatCap Staff
    Great to hear you were able to get that working!
  • Angelo Member
    edited February 12
    Hi all,

    I'm testing the effect of multiple LULC scenarios over time, and of carbon prices and discount rates, on carbon sequestration and its value. I have successfully tested multiple combinations of carbon prices and discount rates following the tutorial for batch processing.

    However, I couldn't find a way to loop through LULC scenarios (and repetitions)! For calculating carbon sequestration, I have to set carbon model parameters for both current and future LULC (while the NDR model in the tutorial requires only one LULC file).

    I tried a few things (probably not the best method, but...). I used the os.walk() function to extract the path of each LULC file, and then, for each run, I wrote the pairs of file paths corresponding to LULC maps at different dates (e.g. decade_0 and decade_1) to a text file, to use as the current and future LULC inputs. However, when I try to read the file paths back from the text file and assign them to the model's args['lulc_cur_path'] and args['lulc_fut_path'], it doesn't work... Also, args['lulc_cur_year'] and args['lulc_fut_year'] have to change according to each LULC scenario... So, many things...

    Any idea how to do this?

    Cheers,

    Ângelo
  • jdouglass Administrator, NatCap Staff
    Hey Angelo,

    One of several issues with the script is the usage of 'print'.  In Python 2.7, this will raise a SyntaxError, and in Python 3, your landcover args keys will end up with a value of None (this is because the print() function returns None).

    I've put together a small python script (see attached) for how you could iterate over pairs of landcover files in the list of files you sent.
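The pitfall can be shown in a few lines. The path here is hypothetical; the point is only that assigning the result of print() stores None rather than the string that was printed.

```python
# print() writes to stdout and returns None, so assigning its result to
# an args key stores None, not the path string.
path = r'C:\data\lulc_decade_0.tif'  # hypothetical LULC file path

args = {}
args['lulc_cur_path'] = print(path)   # wrong: args key ends up None
print(args['lulc_cur_path'])          # prints: None

args['lulc_cur_path'] = path          # right: assign the string itself
```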
  • Hi James,

    Well, thanks for the explanation, and many thanks for the script.
    However, an error occurred (please see the attached file).

    Cheers,
    Ângelo

  • jdouglass Administrator, NatCap Staff
    Whoops, forgot a zip() there.  So that iteration line should read:

    for current_lulc_path, future_lulc_path in zip(landcover_files[:-1], landcover_files[1:]):
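In context, that corrected line pairs each landcover file with the next one in the sorted list, giving one (current, future) pair per model run. A minimal sketch, with hypothetical filenames standing in for the attached list:

```python
# Pairing consecutive landcover rasters with zip(): element i of the
# first slice lines up with element i+1 of the original list.
landcover_files = [
    'lulc_decade_0.tif',  # hypothetical filenames
    'lulc_decade_1.tif',
    'lulc_decade_2.tif',
]

pairs = []
for current_lulc_path, future_lulc_path in zip(
        landcover_files[:-1], landcover_files[1:]):
    # In the real script these would be assigned to
    # args['lulc_cur_path'] and args['lulc_fut_path'] before execute().
    pairs.append((current_lulc_path, future_lulc_path))
```

Three files yield two runs: (decade_0, decade_1) and (decade_1, decade_2).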