Add GMM #113
That'd certainly be in scope. I likely won't have time to work on this until the end of the month, but may be able to after that. Do you have any other references for parallel or distributed GMM? That paper doesn't seem to be publicly available.
The paper is here:
Gave a quick skim of scikit-learn's implementation. A translation of that to work on dask arrays doesn't look too difficult. Unless I missed something, the fanciest thing was a cholesky decomposition, which is implemented in dask.array. @gmaze do you have any interest in working on this?
I think it would be interesting to see how Dask's cholesky factorization
behaves here, but it may be that other algorithms more suited to large
distributed datasets exist. A literature search is possibly still
warranted here.
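For context, here is a minimal sketch of the blocked Cholesky factorization already available in dask.array (the matrix, its size, and the chunking below are purely illustrative):

# dask.array provides a blocked Cholesky factorization; this just exercises it
# on an arbitrary symmetric positive-definite matrix.
import numpy as np
import dask.array as da

rng = np.random.default_rng(0)
A = rng.standard_normal((1_000, 1_000))
cov = da.from_array(A @ A.T + 1_000 * np.eye(1_000), chunks=(250, 250))

L = da.linalg.cholesky(cov, lower=True)  # builds a lazy task graph
L = L.compute()                          # runs the blocked factorization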
Agreed.
I would surely have interest in working on this, but have no timeline before the end of February, and would certainly need a lot of help in order to follow the dask-ml code logic. At this point, I don't quite yet understand where the need for a specific distribution method arises, i.e. why people publish papers on new GMM algorithms versus distributing the bottleneck operation of the classic EM algorithm for a GMM (which is, as you pointed out, the cholesky factorization of the covariance matrices). The first step may be to benchmark the regular GMM EM algorithm with and without specific dask-ml optimized operators.
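A rough sketch of that kind of baseline benchmark might look like the following (the array size, chunking, and number of components are arbitrary assumptions; it only times the plain scikit-learn EM fit on materialized data, which would then be compared against any Dask-backed variant):

# Baseline: time the classic EM fit with scikit-learn on in-memory data.
# A Dask-backed variant would be timed the same way for comparison.
import time
import dask.array as da
from sklearn.mixture import GaussianMixture

X = da.random.random((200_000, 10), chunks=(20_000, 10))
X_local = X.compute()  # materialize for the scikit-learn baseline

start = time.perf_counter()
GaussianMixture(n_components=4, covariance_type="full").fit(X_local)
print(f"scikit-learn EM fit: {time.perf_counter() - start:.2f} s")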
Hi all. I would like to flag my interest in this project as well. It doesn't look like there has been much activity in this area lately. Does anyone have plans to work on this issue in the near-term future? I would be interested in contributing, but like gmaze I would need help getting started.
I didn't get the time to work on this yet because I wanted to focus on releasing a clean version of http://github.com/obidam/pyxpcm , which now implements a choice of two stats backends (scikit-learn or dask_ml).
Thanks for the update @gmaze.
Hi, I did a bit of a literature search on my side. IMO the resource mentioned by @gmaze is a good start, but it's basically a re-implementation designed to reduce data exchange on a cluster of machines. Quoting page 2:
I suggest using a different methodology. One concept that I find particularly interesting is called a coreset. Coresets have already proven to be useful for large-scale modeling of Gaussian Mixtures, as well as K-means and K-median clustering.
Proposed solution:
I believe this methodology is compatible with the current philosophy of dask-ml ("re-implement at scale if required, or simply allow sklearn estimators to scale with a different methodology"). It can also benefit other methods, not only GMMs. Regards,
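As a loose illustration of the coreset idea (not the construction from the coreset papers; the "sensitivity" proxy below is a deliberately crude stand-in), a weighted subsample can be built roughly like this:

# Crude sketch of coreset construction: importance-sample a small weighted subset
# so that weighted statistics on the subset approximate those of the full data.
# Real constructions use sensitivity-based sampling bounds; the proxy here
# (distance to the global mean plus a uniform floor) is only illustrative.
import numpy as np

def build_coreset(X, m, seed=None):
    """Return (points, weights): m rows of X with importance weights."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    p = 0.5 * dist / dist.sum() + 0.5 / n  # mix with uniform so every p > 0
    idx = rng.choice(n, size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])           # unbiased importance weights
    return X[idx], weights

The weights could then be passed to estimators that accept sample_weight (e.g. KMeans.fit); as far as I know scikit-learn's GaussianMixture.fit does not, which is part of the API question discussed below.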
Thanks for sharing @remiadon. One API question around your proposed approach:
I see the suggestion of a method like:
>>> model = Coreset(sklearn.mixture.GaussianMixture())
>>> model.fit(big_X, big_y)  # extracts the coreset, fits the weighted(?) sklearn GMM on the coreset (small, in memory)
>>> model.predict(big_X)     # Dask Array of predictions
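For concreteness, a minimal sketch of what such a meta-estimator might look like (the Coreset name matches the snippet above, but everything here is hypothetical rather than actual dask-ml API, and uniform sampling stands in for a real coreset construction):

# Hypothetical meta-estimator: subsample a Dask array, fit the wrapped
# scikit-learn estimator in memory, and predict blockwise.
import numpy as np
from sklearn.base import BaseEstimator, clone

class Coreset(BaseEstimator):
    def __init__(self, estimator, n_samples=10_000, random_state=None):
        self.estimator = estimator
        self.n_samples = n_samples
        self.random_state = random_state

    def fit(self, X, y=None):
        rng = np.random.default_rng(self.random_state)
        n = X.shape[0]
        # Uniform sampling for simplicity; a real coreset would use importance
        # sampling and pass the resulting weights to the wrapped estimator.
        idx = np.sort(rng.choice(n, size=min(self.n_samples, n), replace=False))
        X_core = np.asarray(X[idx])  # small enough to fit in memory
        self.estimator_ = clone(self.estimator).fit(X_core)
        return self

    def predict(self, X):
        est = self.estimator_
        # Blockwise prediction keeps the output as a lazy Dask array.
        return X.map_blocks(lambda b: est.predict(b), dtype="int64", drop_axis=1)

Usage would then mirror the snippet above, e.g. Coreset(GaussianMixture(n_components=4)).fit(big_X).predict(big_X), returning a lazy Dask array of labels.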
@TomAugspurger, a Coreset meta-estimator would be great! Another way of achieving an equivalent goal would be to implement a CoresetTransformer that would return the data fully transformed (the set of points, weighted). But as far as I know, sklearn prohibits having a different number of rows between input/output of a transformer... Either of those solutions suits me; I can try submitting a PR.
Yes, the Transformer would also work well, but I think it would require scikit-learn/enhancement_proposals#15. I haven't read through that in a while, but I don't know how it proposes to deal with weights. Anyway, I think for now an implementation using a meta-estimator would be most welcome. I think the logic of selecting the coreset is likely to be the most difficult part, regardless of the API :)
I created a PR here: #799. This is a work in progress for now, as most of the sampling methods were designed for KMeans, and usage with Gaussian Mixtures is still a bit obscure to me.
Hi all,
Is there any plan to implement a parallel version of GMM (Gaussian Mixture Modelling)?
Thanks
g
e.g.: http://dx.doi.org/10.1109/CSAE.2012.6272849