[enhancement] limit cmake jobs for Windows backend build #2240
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Flags with carried forward coverage won't be shown.
/intelci: run
/intelci: run
@@ -131,16 +131,34 @@ def custom_build_cmake_clib(
        cmake_args += ["-DADD_ONEDAL_RPATH=ON"]

        cpu_count = multiprocessing.cpu_count()
        # convert to max supported pointer to a memory size in kB
I'm not getting the idea here. sys.maxsize returns the maximum addressable memory range of the platform in bytes, which on a modern machine would be 2^63 (expressing it in KiB would require division by 2^10). Dividing that by 128 would not have any effect, since the available memory on a modern machine is more likely in the range of 2^33 through 2^38 or so.
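For scale, a quick check of the magnitudes involved (illustrative only, assuming a 64-bit CPython build):

```python
import sys

# On a 64-bit CPython build, sys.maxsize is 2**63 - 1: the largest value of
# Py_ssize_t, not a measure of installed or available RAM.
print(sys.maxsize)           # 9223372036854775807
print(sys.maxsize >> 10)     # bytes -> KiB, still astronomically large
print(sys.maxsize // 128)    # ~2**56, far above the 2**33..2**38 bytes of a real machine
```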
Changed; I was a little confused about the Py_ssize_t definition and how it relates to size_t. I have switched the division operators to bitshifts for clarity and directness.
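For reference, the two spellings are equivalent for non-negative integers and power-of-two divisors (illustrative snippet, not the PR code):

```python
memfree_kb = 16 * 2**20                           # e.g. 16 GiB of free memory expressed in kB
assert memfree_kb // 2**20 == memfree_kb >> 20    # floor division by 2**20 == right shift by 20
```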
Given the exact numbers you are arriving at, is this trying to limit the addressable space to accommodate a float64 cast to size_t? What's the idea behind these?
Yeah, I am assuming that the underlying processes have a memory bound on every platform (not just Linux or Windows). I wanted a general value I could easily get out of Python to serve as a conservative upper bound on the memory that can be used, and I then use that as a very large initial value when splitting the work in the multiprocessing step. I am using Py_ssize_t, like sizeof, to yield the largest byte value possible for a platform. This at least gives all platforms some sort of bound without having to explicitly handle platforms beyond Windows/Linux (e.g. macOS).
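Roughly the idea, as a sketch (names here are illustrative and not the exact code in this PR; memfree_kb stands in for whatever per-platform memory probe is available):

```python
import multiprocessing
import sys


def estimate_build_jobs(memfree_kb=None, mem_per_proc_shift=20):
    # When no platform-specific memory probe exists (e.g. macOS), fall back to
    # sys.maxsize expressed in kB so every platform still gets *some* bound.
    if memfree_kb is None:
        memfree_kb = sys.maxsize >> 10
    # Budget roughly 1 GiB (2**20 kB) per compilation process.
    max_jobs_by_memory = max(1, memfree_kb >> mem_per_proc_shift)
    return min(multiprocessing.cpu_count(), max_jobs_by_memory)
```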
Am I understanding it correctly that the purpose of this code is to spawn multiple compilation processes with a memory limit for each such that the sum would not exceed system memory? Or does this code serve some other purpose?
Then I'm not sure the logic would work as intended.
It's not possible to predict beforehand how much memory a compilation process will use - the 1 GiB estimate here, for example, would not suffice for compiling something like DuckDB in parallel with flags like LTO (some parallel processes can take north of 4 GiB with GCC). I haven't measured how much memory sklearnex takes, though.
Additionally, if the processes run out of memory by a small margin, they might still succeed after switching to swap/page files.
I agree with your assessment. I think some more information about #1196 and the problem it was solving may be necessary to guide development here. My goal was just to get a TODO out of the codebase that looked easy enough to solve on Windows. Should we set up a meeting before I spend too much time on this PR?
Maybe a reasonable compromise would be to just get the total RAM in the system and divide by 4 GB to get a target number of max parallel processes (rough sketch below). I don't think the problem is approachable otherwise - new processes can be spawned and terminated by the user while the compilation is running either way, which also changes the calculation.
It's also quite an odd thing to have in a project's build script. I am not aware of any other ML library limiting the number of parallel jobs for the user based on available RAM - typically they just OOM on their own and the user then adjusts threads as needed.
Was there a CI failure on Windows due to OOM?
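For concreteness, the compromise could look roughly like this (sketch only; total_ram_bytes would come from whatever memory probe the build script already uses):

```python
import multiprocessing


def jobs_from_total_ram(total_ram_bytes, bytes_per_job=4 * 2**30):
    # Budget ~4 GiB per compilation process and never exceed the CPU count.
    by_memory = max(1, total_ram_bytes // bytes_per_job)
    return min(multiprocessing.cpu_count(), by_memory)
```

For example, jobs_from_total_ram(32 * 2**30) would yield 8 jobs on a 32 GiB machine, CPU count permitting.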
No error that I saw for Windows; I assume it was out of memory on Linux at some point? @Alexsandruss
The 1 GB constraint was chosen in #1196 based on CI VM parameters to prevent build failures due to OOM on Linux only. Changing to some other working weak constraint will be fine. Also, MemFree might be changed to MemAvailable.
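A minimal sketch of reading MemAvailable instead of MemFree (Linux only; /proc/meminfo reports values in kB):

```python
def meminfo_kb(field="MemAvailable"):
    # Returns the requested /proc/meminfo field in kB, or None if it is
    # missing (e.g. MemAvailable does not exist on very old kernels).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return None
```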
/intelci: run
        mem_per_proc = 20  # 2**20 kB or 1GB
        cpu_count = min(cpu_count, floor(max(1, memfree >> mem_per_proc)))
psutil will significantly simplify the code but introduces a new build dependency:

Suggested change:
-        mem_per_proc = 20  # 2**20 kB or 1GB
-        cpu_count = min(cpu_count, floor(max(1, memfree >> mem_per_proc)))
+        mem_per_proc = 20  # 2**20 kB or 1GB
+        cpu_count = min(psutil.cpu_count(), floor(max(1, psutil.virtual_memory().available >> mem_per_proc)))
Note that psutil.cpu_count can return None.
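One way to guard against that (sketch; os.cpu_count can also return None, hence the final fallback to a single job):

```python
import os

import psutil

# Both psutil.cpu_count() and os.cpu_count() may return None on unusual
# platforms, so fall back to running a single build job in that case.
cpu_count = psutil.cpu_count() or os.cpu_count() or 1
```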
Description
Follow-on from #1196 which solves a leftover TODO. This limits the number of cmake processes based on memory, following the procedure developed for Linux builds. A default use of sys.maxsize will set an overall limit.

No performance benchmarks necessary.
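For illustration, a hypothetical sketch of handing a computed job count to CMake's generic build driver (the actual argument plumbing in the project's build script may differ):

```python
import subprocess


def build_with_job_limit(build_dir, jobs):
    # CMake >= 3.12 accepts a parallel job count via --parallel on the
    # generic build driver, independent of the underlying generator.
    subprocess.check_call(["cmake", "--build", build_dir, "--parallel", str(jobs)])
```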
PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require checkboxes for performance, while a PR with any change in actual code should have checkboxes and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving PR from draft:
- PR completeness and readability
- Testing
- Performance