Multiple CPU interop fixes for serialization and cloning #6223
base: branch-25.04
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

/ok to test
```python
def __sklearn_clone__(self):
    return _clone_parametrized(self)
```
This fixes the problem of the cloned estimator not being a wrapped estimator. But it doesn't solve the problem that (some of) the hyperparameters get changed in the process. The other thing I don't like about this is that I don't understand why it solves the problem. `clone` should already do exactly this, at least that is what it looks like to me in https://github.com/scikit-learn/scikit-learn/blob/a4225f305a88eea7bababbfa2ff479a118406c93/sklearn/base.py#L93-L95

Hyper-parameter problem:

```python
from sklearn.decomposition import PCA
from sklearn import clone

pca = PCA(n_components=42, svd_solver="arpack")
pca2 = clone(pca)
print(pca2)
# -> wrapped <class 'sklearn.decomposition._pca.PCA'>
pca2.get_params()
# -> {'handle': <pylibraft.common.handle.Handle object at 0x7fa50b460c90>, 'verbose': 4, 'output_type': 'numpy', 'copy': True, 'iterated_power': 15, 'n_components': 42, 'svd_solver': 'full', 'tol': 1e-07, 'whiten': False, 'random_state': None}
```

The solver is set to "full" :-/ But this is already a problem before you clone the estimator (`get_params` returns the wrong thing already before cloning).

So I think we need to understand this a bit better. Because if `get_params` works correctly and `clone` gets passed the right thing (with the correct `__class__`), it should "just work"(tm).
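For context, the "just work" expectation can be sketched with a simplified, hypothetical stand-in for what `clone` does (the class and function names below are illustrative, not scikit-learn's actual implementation): it reads the constructor parameters via `get_params` and builds a fresh instance of the same `__class__` from them, so a correct `get_params` is all it needs.

```python
# Simplified, hypothetical sketch of scikit-learn's clone semantics:
# read the constructor parameters via get_params and build a fresh
# instance of the same __class__ from them.
def simple_clone(estimator):
    klass = estimator.__class__
    params = estimator.get_params(deep=False)
    return klass(**params)


class TinyEstimator:
    """Stand-in estimator following the get_params convention."""

    def __init__(self, n_components=2, svd_solver="auto"):
        self.n_components = n_components
        self.svd_solver = svd_solver

    def get_params(self, deep=True):
        return {"n_components": self.n_components,
                "svd_solver": self.svd_solver}


est = TinyEstimator(n_components=42, svd_solver="arpack")
cloned = simple_clone(est)
print(cloned.get_params())
# {'n_components': 42, 'svd_solver': 'arpack'}
```

If `get_params` on the proxy returned translated values (e.g. `svd_solver="full"`), this path would bake them into the clone, which matches the behaviour observed above.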
One improvement would be to not delegate `__sklearn_clone__` to the original estimator in

cuml/python/cuml/cuml/internals/base.pyx, line 890 in f60b5f0:

```python
if GlobalSettings().accelerator_active or self._experimental_dispatching:
```
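A minimal sketch of what "not delegating `__sklearn_clone__`" could look like (all names here are illustrative, not cuml's actual code): forward attribute lookups to the wrapped estimator only while the accelerator is active, but keep `__sklearn_clone__` off the delegation list so scikit-learn's default clone path is used.

```python
# Hypothetical sketch: a proxy that forwards attribute lookups to the
# wrapped estimator, except for names explicitly excluded from delegation.
_DO_NOT_DELEGATE = {"__sklearn_clone__"}


class DelegatingProxy:
    def __init__(self, wrapped, accelerator_active=True):
        self._wrapped = wrapped
        self._accelerator_active = accelerator_active

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; forward to the
        # wrapped estimator unless the name is excluded.
        if self._accelerator_active and name not in _DO_NOT_DELEGATE:
            return getattr(self._wrapped, name)
        raise AttributeError(name)


class Wrapped:
    def fit(self, X):
        return self


proxy = DelegatingProxy(Wrapped())
print(hasattr(proxy, "fit"))                # True: delegated
print(hasattr(proxy, "__sklearn_clone__"))  # False: not delegated
```

With `__sklearn_clone__` absent, `clone` falls back to its default parameter-based behaviour instead of cloning through the wrapped estimator.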
And probably the reason why `get_params` returns the wrong things is that we use the one on the cuml estimator instead of the one on the scikit-learn estimator.
The hyperparam change that you mention is not an error; it is the intentional change we do to translate the call from CPU to GPU:

cuml/python/cuml/cuml/decomposition/pca.pyx, line 282 in f60b5f0:

```python
"arpack": "full",
```

The hyperparameters now are correct. And `get_params` is indeed returning the params of the proxy, which has the correct `n_components=42`.

The real question is the one you pose indeed: what do we need `get_params` to return, and why. But the cloning is now working correctly; `get_params` is a separate question.
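The translation idea can be sketched as a value mapping: hyperparameter values the GPU implementation does not support are mapped onto supported equivalents. The dict below mirrors the `"arpack": "full"` entry referenced above; the function and dict names are illustrative, not cuml's actual API.

```python
# Hypothetical sketch of CPU->GPU hyperparameter translation: values the GPU
# implementation does not support are replaced with supported equivalents.
SVD_SOLVER_TRANSLATION = {"arpack": "full"}  # GPU solver used in place of arpack


def translate_params(cpu_params):
    gpu_params = dict(cpu_params)  # never mutate the caller's dict
    solver = gpu_params.get("svd_solver")
    if solver in SVD_SOLVER_TRANSLATION:
        gpu_params["svd_solver"] = SVD_SOLVER_TRANSLATION[solver]
    return gpu_params


print(translate_params({"n_components": 42, "svd_solver": "arpack"}))
# {'n_components': 42, 'svd_solver': 'full'}
```

The debate in this thread is essentially about *where* this translation should happen: at dispatch time only, or also in what `get_params` reports.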
I think `get_params` should return the variables and values that you'd expect as a scikit-learn user. This means the return value should be the same when the accelerator is turned on and when it is turned off.

My thinking is:
- the scikit-learn API is the boundary layer: below it we can do what we want to make the accelerator work, above it everything should look "just like scikit-learn"
- if we return something different, we need to explain to users that we do this, why, etc.
- the more we make different, even with good reason, the higher the chance that something somewhere that relies on this breaks
- it will make the translator simpler to reason about, because the input will only ever be user-provided values (not sometimes user values and sometimes values that have already been translated)
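The boundary-layer idea above can be sketched roughly like this (class and attribute names are hypothetical, not cuml's actual code): the proxy stores the user's scikit-learn-style parameters untouched and translates them only when dispatching to the GPU implementation, so `get_params` always reflects what the user passed in.

```python
# Hypothetical sketch: user-facing params are kept verbatim; translation to
# GPU values happens only at dispatch time.
class ProxySketch:
    _TRANSLATION = {"svd_solver": {"arpack": "full"}}

    def __init__(self, **params):
        self._user_params = dict(params)  # boundary-layer view, never mutated

    def get_params(self, deep=True):
        # Always the scikit-learn user's view, accelerator on or off.
        return dict(self._user_params)

    def _gpu_params(self):
        # Dispatch-time view: translate unsupported values.
        out = {}
        for key, value in self._user_params.items():
            out[key] = self._TRANSLATION.get(key, {}).get(value, value)
        return out


p = ProxySketch(n_components=42, svd_solver="arpack")
print(p.get_params())   # user view: svd_solver stays 'arpack'
print(p._gpu_params())  # dispatch view: svd_solver translated to 'full'
```

Because the translator only ever sees `_user_params`, its input is always user-provided values, never already-translated ones.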
Made the changes; `get_params` and `set_params` now behave as you describe. And removed the usage of `__sklearn_clone__` as per the suggestion.

Need to check in pytests before we merge, but could use another quick re-review @betatim to see if you find any other fundamental issues with the approach.
Note that it is kinda late and I pushed some changes; hope I didn't break something in the last couple of commits.
Looks good to me. Pushed a commit with two tests (credit for some of them to my friend Cursor)
…get/set_params behave as expected matching wrapped estimator
LGTM
/ok to test
```python
def get_params(self, deep=True):
    """
    If accelerator is active, we return the params of the CPU estimator
    being helf by the class, otherwise we just call the regular
```
```diff
-    being helf by the class, otherwise we just call the regular
+    being held by the class, otherwise we just call the regular
```
```python
def set_params(self, **params):
    """
    For setting parameters, when the accelerator is active, we translate
    the parameters to set the GPU params, and also update the
    params of the CPU class. Otherwise dispatching to the CPU class after
    updating params of the GPU estimator will dispatch to an estimator
    with outdated params.
    """
    self._cpu_hyperparams_dict.update(params)
```
This call will fail with an `AttributeError` when `_cpu_hyperparams_dict` was not initialized yet. The issue is that this new attribute is only initialized within the `ProxyEstimator` class, but used within the `UniversalBase` class. We should probably initialize it within the base class.
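A sketch of the suggested fix (the class and attribute names `UniversalBase`, `ProxyEstimator`, and `_cpu_hyperparams_dict` are from this review; everything else below is illustrative): initialize the dict in the base class so `set_params` can update it even when a subclass never set it.

```python
# Illustrative sketch: the attribute is initialized in the base class, so
# set_params never hits an AttributeError, regardless of which subclass
# the call goes through.
class UniversalBaseSketch:
    def __init__(self):
        self._cpu_hyperparams_dict = {}  # always present

    def set_params(self, **params):
        self._cpu_hyperparams_dict.update(params)
        return self


class ProxyEstimatorSketch(UniversalBaseSketch):
    # Does not need to (re)initialize _cpu_hyperparams_dict itself.
    pass


est = ProxyEstimatorSketch()
est.set_params(n_components=3)
print(est._cpu_hyperparams_dict)
# {'n_components': 3}
```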
```python
    """
    If accelerator is active, we return the params of the CPU estimator
    being helf by the class, otherwise we just call the regular
    get_params of the Base class.
    """
```
Triggered by Simon's comment: should this maybe be a comment instead of a docstring? It reads more like a comment for other devs, and less like documentation for users.

Below is a suggestion with the `get_params` docstring from scikit-learn:
```diff
-    """
-    If accelerator is active, we return the params of the CPU estimator
-    being helf by the class, otherwise we just call the regular
-    get_params of the Base class.
-    """
+    """
+    Get parameters for this estimator.
+
+    Parameters
+    ----------
+    deep : bool, default=True
+        If True, will return the parameters for this estimator and
+        contained subobjects that are estimators.
+
+    Returns
+    -------
+    params : dict
+        Parameter names mapped to their values.
+    """
```
I much prefer this docstring, and I think there is value in consistent documentation across all components, but we should also recognize that these are internal components and their documentation is not directed at users.
`get_params` is part of the official API of a scikit-learn estimator, just like `fit` or `predict`. So I think it deserves a good docstring, especially as it is free to get one.