Hi Ameer
Apologies for the late reply; this one got lost due to the timing of the post.
This depends on how you want to set up the calibration/validation periods and whether they are connected. If you want to generate calibration/validation metrics separately and not use a warm-up period in validation, then yes, the solution.rvc should be used to start the validation model (assuming that the calibration period comes first).
If the periods are treated independently and you want to evaluate your model over any given validation period, then using the exact solution.rvc (versus a more generic initial condition) is less critical, but either way you should have a warm-up period.
Two other ways to handle this are:
- using observation weights to determine which days metrics are calculated for, or, more easily,
- in newer versions of Raven, the :EvaluationPeriod argument in the rvi file allows you to define these periods within the same model run, so you can evaluate calibration and validation metrics in one run of Raven. For example:
:EvaluationPeriod CALIBRATION 2003-10-01 2005-09-30
:EvaluationPeriod VALIDATION 2005-10-01 2007-09-30
:EvaluationPeriod ANOTHER_PERIOD 2004-11-01 2006-03-30
This will allow you to calculate separate metrics for different periods within the same model run, avoiding the need to use the solution.rvc file. In this case you should still provide a reasonable starting condition and a warm-up period before the first evaluation period. The resulting Diagnostics.csv file will have the usual reported metrics, as well as an additional row of metrics for each defined period.
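If you ever want to check these numbers yourself, computing a metric like Nash-Sutcliffe efficiency over separate date windows from the simulated/observed series is straightforward. A minimal Python sketch (toy data; the function and variable names here are illustrative, not part of Raven):

```python
from datetime import date

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ssv = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ssv

def period_nse(records, start, end):
    """NSE over (date, obs, sim) records falling within [start, end]."""
    window = [(o, s) for d, o, s in records if start <= d <= end]
    obs = [o for o, _ in window]
    sim = [s for _, s in window]
    return nse(obs, sim)

# Toy daily flow records: (date, observed, simulated)
records = [
    (date(2003, 10, 1), 10.0, 9.0),
    (date(2003, 10, 2), 12.0, 11.5),
    (date(2005, 10, 1), 8.0, 7.0),
    (date(2005, 10, 2), 9.0, 9.5),
]

# Evaluate each period independently, mirroring the :EvaluationPeriod windows
calib = period_nse(records, date(2003, 10, 1), date(2005, 9, 30))
valid = period_nse(records, date(2005, 10, 1), date(2007, 9, 30))
print(round(calib, 3), round(valid, 3))  # prints: 0.375 -1.5
```

The same windowing idea applies to any metric you want to compute per period from the model output.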
Cheers,
Rob