feature: new tests added for tsne to expand test coverage #2229

Open
wants to merge 7 commits into main
Conversation


@yuejiaointel commented Dec 17, 2024

Description

Added additional tests in sklearnex/manifold/tests/test_tsne.py to expand the test coverage for t-SNE algorithm.
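
For context, a minimal sketch of the kind of test being added here, assuming the usual sklearnex import path; the actual tests in the PR may be organized differently:

import numpy as np
from sklearnex.manifold import TSNE

# Hypothetical basic sanity test: correct output shape and finite values.
def test_tsne_basic_shape():
    rng = np.random.default_rng(42)
    X = rng.random((100, 10))
    embedding = TSNE(n_components=2, perplexity=30.0, random_state=42).fit(X).embedding_
    assert embedding.shape == (100, 2)
    assert np.all(np.isfinite(embedding))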

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended testing suite if new functionality was introduced in this PR.


codecov bot commented Dec 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Flag Coverage Δ
github 83.18% <ø> (ø)

Flags with carried forward coverage won't be shown.

@ethanglaser ethanglaser marked this pull request as draft December 17, 2024 19:02
@yuejiaointel (Author)

/intelci: run

@yuejiaointel yuejiaointel marked this pull request as ready for review December 19, 2024 00:00
@ethanglaser (Contributor)

/intelci: run

sklearnex/manifold/tests/test_tsne.py (resolved)
sklearnex/manifold/tests/test_tsne.py (outdated, resolved)
tsne_perplexity = TSNE(n_components=2, perplexity=9).fit(X_perplexity)
assert tsne_perplexity.embedding_.shape == (10, 2)

# Test large data
Contributor:

It feels like this one is perhaps not needed, considering that there's already a similar test earlier on with shape (100,10).

Author:

Hi David, I removed this test. Best, Yue

Contributor:

Thanks, although if the "Test reproducibility" test is also checking the shape, it still feels like "Test large data" could be removed altogether since it's not testing anything different. Or does the algorithm have some size-dependent behavior that would change between (50, 10) and (1000, 50)?

sklearnex/manifold/tests/test_tsne.py (outdated, resolved)
sklearnex/manifold/tests/test_tsne.py (outdated, resolved)
sklearnex/manifold/tests/test_tsne.py (outdated, resolved)
sklearnex/manifold/tests/test_tsne.py (outdated, resolved)
yue.jiao added 2 commits December 19, 2024 08:37
@yuejiaointel (Author)

/intelci: run

False,
),
(
"Test reproducibility",
Contributor:

I think the reproducibility test here was lost.
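
A sketch of what such a reproducibility test could look like, assuming a fixed random_state makes two runs produce identical embeddings on the same build:

import numpy as np
from sklearnex.manifold import TSNE

# Hypothetical reproducibility check: same data, same random_state,
# same embedding.
def test_tsne_reproducibility():
    X = np.random.default_rng(0).random((50, 10))
    emb1 = TSNE(n_components=2, random_state=42).fit(X).embedding_
    emb2 = TSNE(n_components=2, random_state=42).fit(X).embedding_
    np.testing.assert_allclose(emb1, emb2)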

assert (
embedding.shape == expected_shape
), f"{description}: Incorrect embedding shape."
if device_filter == "gpu":
Contributor:

I think this doesn't need to be specific to GPU.

"description,X_generator,n_components,perplexity,expected_shape,should_raise",
[
(
"Test basic functionality",
Contributor:

pytest has a built-in placeholder for parametrization names - see for example here: https://docs.pytest.org/en/stable/example/parametrize.html#different-options-for-test-ids
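
For reference, a sketch of what that looks like with pytest.param ids (parameter values here are illustrative, not the PR's actual cases):

import numpy as np
import pytest
from sklearnex.manifold import TSNE

@pytest.mark.parametrize(
    "X_generator,expected_shape",
    [
        pytest.param(lambda rng: rng.random((50, 10)), (50, 2), id="basic functionality"),
        pytest.param(lambda rng: rng.random((10, 5)), (10, 2), id="small data"),
    ],
)
def test_tsne_shapes(X_generator, expected_shape):
    # The id string replaces the manual `description` field in failure output.
    X = X_generator(np.random.default_rng(0))
    assert TSNE(n_components=2, perplexity=5.0).fit(X).embedding_.shape == expected_shape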

assert np.any(
embedding != 0
), f"{description}: Embedding contains only zeros."
except Exception as e:
Contributor:

This try-except could be removed after switching to pytest named parametrizations.
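
With named parametrizations, cases that are expected to fail can use pytest.raises instead of a broad try-except, roughly like this (the exception type is an assumption based on scikit-learn's input validation):

import numpy as np
import pytest
from sklearnex.manifold import TSNE

# Hypothetical failure-path test: perplexity must be less than n_samples.
def test_tsne_invalid_perplexity():
    X = np.random.default_rng(0).random((10, 5))
    with pytest.raises(ValueError):
        TSNE(n_components=2, perplexity=100.0).fit(X)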

),
(
"Edge Case: Sparse-Like High-Dimensional Data",
lambda rng: rng.random((50, 500)) * (rng.random((50, 500)) > 0.99),
Contributor:

For less-random reproducibility, perhaps it could add a test where one column is constant or an exact duplicate of another column.
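
A sketch of that suggestion, with one constant column and one exact duplicate column (shapes and values are illustrative):

import numpy as np
from sklearnex.manifold import TSNE

# Hypothetical deterministic edge case: degenerate columns should not
# break the embedding.
def test_tsne_constant_and_duplicate_columns():
    X = np.random.default_rng(0).random((50, 10))
    X[:, 3] = 1.0        # constant column
    X[:, 7] = X[:, 2]    # exact duplicate of another column
    embedding = TSNE(n_components=2, perplexity=10.0).fit(X).embedding_
    assert embedding.shape == (50, 2)
    assert np.all(np.isfinite(embedding))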

@david-cortes-intel (Contributor)

It looks like we don't have any test, either here or in daal4py, that checks that the results from TSNE make sense beyond having the right shape and non-missingness.

Since there's a very particular dataset here for the last test, it'd be helpful to add other assertions there along the lines of checking that the embeddings end up making some points closer than others as would be expected given the input data.
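
One way to phrase such an assertion, sketched under the assumption of two well-separated blobs: points from the same blob should stay closer together in the embedding, on average, than points from different blobs.

import numpy as np
from scipy.spatial.distance import cdist
from sklearnex.manifold import TSNE

# Hypothetical structure-preservation check: within-cluster distances in
# the embedding should be smaller than between-cluster distances.
def test_tsne_preserves_cluster_structure():
    rng = np.random.default_rng(0)
    a = rng.normal(loc=0.0, scale=0.1, size=(25, 10))
    b = rng.normal(loc=10.0, scale=0.1, size=(25, 10))
    emb = TSNE(n_components=2, perplexity=10.0, random_state=0).fit(np.vstack([a, b])).embedding_
    within = cdist(emb[:25], emb[:25]).mean()
    across = cdist(emb[:25], emb[25:]).mean()
    assert within < across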
