Conversation
Required for easyscience/EasyReflectometryApp#290
damskii9992 left a comment:
This PR needs an ADR explaining the software architecture design implemented and the rationale for why this design was chosen.
We probably also need a tutorial showcasing how the callback might be used. Both for ourselves as a kind of documentation but also for potential users.
I have not bothered looking at the unit tests.
    results.y_obs = self._cached_model.y
    results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
    results.y_err = weights
    results.n_evaluations = int(fit_results.nf)
Shouldn't this be `fit_results.nfev`?

No, this is correct for DFO-LS: `nf` is the number of objective evaluations. DFO-LS doesn't follow the SciPy-style `OptimizeResult` API.
    chi2_val = self.chi2
    reduced_val = self.reduced_chi2
    if not np.isfinite(chi2_val) or not np.isfinite(reduced_val):
        raise ValueError
This error should probably have some context.

This `ValueError` is not a user-facing error. It is raised only to be caught immediately by the `except Exception` in line 54, after which `chi2` and `reduced_chi2` are rendered as N/A.

Even then, we might later change code elsewhere to catch the exception and simply re-raise the error here. Without context, such re-raises would be hard to diagnose. It never hurts to have a descriptive error message, even if it isn't currently being used.

I don't think this is necessary, but I will begrudgingly accept the request.
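The pattern being discussed can be sketched as follows. This is a minimal standalone illustration, not the PR's actual code; the function name `format_fit_statistics` is hypothetical:

```python
import numpy as np

def format_fit_statistics(chi2_val: float, reduced_val: float) -> str:
    try:
        if not np.isfinite(chi2_val) or not np.isfinite(reduced_val):
            # Descriptive message even though the caller currently swallows
            # the exception, so a future re-raise stays diagnosable.
            raise ValueError(
                f"Non-finite fit statistics: chi2={chi2_val!r}, "
                f"reduced_chi2={reduced_val!r}"
            )
        return f"chi2={chi2_val:.4g}, reduced chi2={reduced_val:.4g}"
    except Exception:
        # Mirrors the existing behaviour: render as N/A on any failure.
        return "N/A"
```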
preparation for `interim_updates` branch merge
damskii9992 left a comment:
I would still like to see a tutorial or something that showcases how the progress callback might actually be used, not just a `MagicMock` unit test.
    results.y_obs = self._cached_model.y
    results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
    results.y_err = self._cached_model.dy
    results.n_evaluations = n_evaluations
I probably wouldn't save `n_evaluations` here, but rather `iterations`, since what we want to report isn't how many times the fit has evaluated the model function, but rather how many iterations the fit has run.

To be somewhat pedantic: both the number of iterations and the number of evaluations can be relevant parameters.

Cross-backend consistency (as I said earlier): LMFit's `nfev` and DFO-LS's `nf` both count objective-function calls, so `n_evaluations` matches across backends. I would like to be consistent and report just this. I will document explicitly in the BUMPS docstring that for BUMPS this counts objective evaluations, not iterations.
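A backend-independent way to guarantee this consistency is to count calls on our side rather than trusting each backend's reporting. A sketch, under the assumption that every minimizer funnels through a single objective callable (class name `EvaluationCounter` is hypothetical):

```python
class EvaluationCounter:
    """Wraps an objective function and counts how often it is called,
    giving a uniform n_evaluations regardless of backend reporting."""

    def __init__(self, objective):
        self._objective = objective
        self.n_evaluations = 0

    def __call__(self, *args, **kwargs):
        self.n_evaluations += 1
        return self._objective(*args, **kwargs)
```

Usage: wrap the objective before handing it to the minimizer, then read `counter.n_evaluations` when building `FitResults`.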
    point=np.asarray(history.point[0]),
    nllf=float(history.value[0]),
    )
    self._callback(payload)
Didn't you plan to split this, the callback monitor, into a separate PR?

No, what I split out was the cancellation functionality. The callback monitor has always been a part of this; it is how we get the updates.
    def __call__(self, history):
        self.last_step = int(history.step[0])
We are only supposed to have one class per Python file. I could accept one additional small class, but you're now up to 2-3 small additional classes, so I think they belong in separate files in separate sub-folders.

Those are very small, single-task classes that apply directly to the Bumps class functionality. Splitting them out into, say, utils.py would suggest we could reuse them elsewhere, which is not the case. If, at some point, we find these can be instantiated by another class, I'd pull them out.
    method_str = method_dict.get('method', self._method)
    fitclass = self._resolve_fitclass(method_str)

    step_counter = _StepCounterMonitor()
Does this change with our latest changes in #231? We don't need the minimal step counter for the `FitResults` now, right? Since we get `n_evaluations` there.
    progress_callback: Optional[Callable[[dict], Optional[bool]]] = None,
    callback: Optional[Callable[[DFOCallbackState], None]] = None,
    callback_every: int = 1,
    callback_on_improvement_only: bool = False,
You did not separate this out into a different PR, so I guess we should discuss it here. Shouldn't these options also be available in the other minimizers? :)
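If these options were shared across minimizers, the throttling logic could live in one reusable helper. A sketch of what `callback_every` and `callback_on_improvement_only` semantics might look like, inferred from the parameter names in the diff (the class name `CallbackThrottle` is hypothetical):

```python
class CallbackThrottle:
    """Decides whether to fire the user callback on a given iteration."""

    def __init__(self, every: int = 1, on_improvement_only: bool = False):
        self.every = every
        self.on_improvement_only = on_improvement_only
        self._count = 0
        self._best = float("inf")

    def should_fire(self, value: float) -> bool:
        self._count += 1
        improved = value < self._best
        if improved:
            self._best = value
        if self.on_improvement_only and not improved:
            return False
        return self._count % self.every == 0
```

Each minimizer's native callback would then call `should_fire(objective_value)` before invoking the user's progress callback.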
    1 for parameter in params.values() if getattr(parameter, 'vary', False)
    )
    degrees_of_freedom = residual_array.size - varied_parameter_count
    reduced_chi2 = chi2 / degrees_of_freedom if degrees_of_freedom > 0 else chi2
Same question as for DFO-LS: does LMFit not calculate chi2, and can't you just use that instead of recalculating it every iteration?

Nope. lmfit's `iter_cb` signature is `(params, iter, resid, *args, **kwargs)`; no pre-computed chi2 is passed in. LMFit's `MinimizerResult.chisqr` is only available after the fit completes, not during iteration. So summing `residuals**2` inside the callback is the correct approach.
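The point above can be sketched as a minimal lmfit-style iteration callback. This is an illustration only; the factory name `make_iter_cb` and the payload keys are hypothetical, and the signature follows lmfit's documented `iter_cb` convention:

```python
import numpy as np

def make_iter_cb(progress_callback):
    """Build an lmfit-style iter_cb that forwards progress to a user callback."""

    def iter_cb(params, iteration, resid, *args, **kwargs):
        # lmfit passes the raw residual array, so chi2 must be summed here.
        residual = np.asarray(resid, dtype=float)
        chi2 = float(np.sum(residual**2))
        progress_callback({"iteration": int(iteration), "chi2": chi2})

    return iter_cb
```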
This pull request adds support for progress callbacks during fitting operations. The main goal is to allow users to receive progress updates and optionally cancel fits in progress.

- Added an optional `progress_callback` parameter to the fit methods in `Fitter`, `MinimizerBase`, `MultiFitter`, and all minimizer subclasses, allowing users to receive iterative progress updates and cancel fits, with graceful cancellation handling.
- Implemented progress callback integration in `LMFit`: added `_create_iter_callback` to wrap the user callback, constructed detailed progress payloads, and enabled fit cancellation by raising a new `FitCancelled` exception. Ensured parameter values are restored on cancellation or error.
- Added the `FitCancelled` exception class to signal user-requested cancellation, and ensured proper restoration of parameter values on cancellation or error.
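Along the lines of the tutorial the reviewer asked for, user code might look roughly like this. The payload keys and the return-True-to-cancel convention are assumptions inferred from this PR's description, not a confirmed API:

```python
def on_progress(payload: dict) -> bool:
    """Print progress and request cancellation after a budget of evaluations."""
    print(f"evaluation {payload.get('n_evaluations')}: chi2={payload.get('chi2')}")
    # Returning True requests graceful cancellation; per the PR description,
    # FitCancelled is raised internally and parameter values are restored.
    return payload.get("n_evaluations", 0) >= 1000

# Hypothetical call site:
# fitter.fit(data, progress_callback=on_progress)
```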