Allow MultiProcessShared to spawn process and delete directly with obj #37112
Conversation
R: @damccorm
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment `/gemini review`.
```python
  def __setstate__(self, state):
    self.__dict__.update(state)

  def __getstate__(self):
    return self.__dict__
```
I assume this is so that this is pickleable, but is it valid? Normally I'd expect this not to be pickleable, since the proxy objects aren't necessarily valid in another context.
Yeah, this is exactly what was needed for the pickling stuff. It does seem to be valid in testing with the custom-built Beam version loaded on a custom container.
I think it would only be valid if you unpickle onto the same machine (and maybe even in the same process). Could you remind me what unpickling issues you ran into?
Just tried removing these and running the test locally; it's this infinite recursion that happens if I have a proxy on a proxy:
```
<string>:2: in make_proxy
???
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/managers.py:822: in _callmethod
    kind, result = conn.recv()
    ^^^^^^^^^^^
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/connection.py:251: in recv
    return _ForkingPickler.loads(buf.getbuffer())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
    ^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
    ^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
    ^^^^^^^^^^^^^^^^^
E   RecursionError: maximum recursion depth exceeded
!!! Recursion detected (same locals & position)
```
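For context on the failure mode: the recursion comes from `__getattr__` being consulted while unpickling, before `__dict__` has been restored. A minimal sketch of the mechanism, assuming a wrapper that forwards attribute access the way the `_proxyObject` lookup in the trace does (the class name and demo values are illustrative, not Beam's code):

```python
import pickle


class ProxyWrapper:
  def __init__(self, proxyObject):
    self._proxyObject = proxyObject

  def __getattr__(self, name):
    # Only runs when normal lookup fails. During unpickling, __dict__ is
    # still empty, so self._proxyObject itself falls through to
    # __getattr__ again, recursing until the interpreter gives up.
    return getattr(self._proxyObject, name)

  # Defining these on the class means pickle finds them by normal lookup
  # and never hits __getattr__ before __dict__ is restored.
  def __getstate__(self):
    return self.__dict__

  def __setstate__(self, state):
    self.__dict__.update(state)


restored = pickle.loads(pickle.dumps(ProxyWrapper('fake proxy')))
print(restored._proxyObject)  # -> fake proxy
```

Removing `__getstate__`/`__setstate__` from this sketch reproduces the same `RecursionError`, because `pickle` looks up `__setstate__` with `getattr` on the half-built instance.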
By "proxy on proxy" I meant that a MultiProcessShared object is created first, and the instance initialized inside it also tries to create MultiProcessShared objects. So, for example, like this test:
```python
class SimpleClass:
  def make_proxy(
      self, tag: str = 'proxy_on_proxy', spawn_process: bool = False):
    return multi_process_shared.MultiProcessShared(
        Counter, tag=tag, always_proxy=True,
        spawn_process=spawn_process).acquire()


def test_proxy_on_proxy(self):
  shared1 = multi_process_shared.MultiProcessShared(
      SimpleClass, tag='proxy_on_proxy_main', always_proxy=True)
  instance = shared1.acquire()
  proxy_instance = instance.make_proxy()
  self.assertEqual(proxy_instance.increment(), 1)
```
The stack trace unfortunately stops here; the above doesn't have more either.
Does this mean we're also double-proxying the data (once from client to model manager, once to the model manager process)?

Otherwise we will need to make RunInference do the work of managing the model instances to avoid this pattern. WDYT?

I think this is ok - it shouldn't need to be a ton of code (basically a "check in before inference and after inference"), and I think it will end up being more efficient. A rough sketch of that pattern follows below.
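What that check-in/check-out pattern could look like (every name here is invented for illustration; nothing below is from the PR):

```python
class InferenceRunner:
  """Hypothetical RunInference-side wrapper around a model manager."""
  def __init__(self, manager, model_tag):
    self._manager = manager
    self._model_tag = model_tag

  def run(self, batch):
    # Check in with the manager before inference...
    model = self._manager.check_out(self._model_tag)
    try:
      return model.predict(batch)
    finally:
      # ...and after inference, so the manager can track live usage.
      self._manager.check_in(self._model_tag)
```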
The data would go directly from the client to the model; the model manager will just hand the proxy object of the model instance directly to RunInference.

I also just realized we might need the proxy to hold on to the proxy/reference of the model instances, because otherwise sharing the model instances across different SDK harnesses will be challenging, and we would probably end up storing the same info as the proxy object (URI, port, etc.).
> the model manager will just hand the proxy object of the model instance directly to RunInference.

Oh right, this is why it needs to be pickled in the first place, since we copy over the full object.

> I also just realized we might need the proxy to hold on to the proxy/reference of the model instances, because otherwise sharing the model instances across different SDK harnesses will be challenging, and we would probably end up storing the same info as the proxy object (URI, port, etc.).

I'm not following what you're saying here, I think because "proxy" is an overloaded term at this point. Maybe you're saying that the proxy returned from the model manager to the client might not have a valid reference to the actual model, at which point we'd need tighter coordination with the model manager anyway. I think this is probably right.
Yeah, if we return the MultiProcessShared object back, it will probably end up needing to be pickled for it to work, unless the model manager doesn't hand back the model instance but only the tag, in which case RunInference will try to create a MultiProcessShared object with that tag.

Having the model manager only manage and return the MPS tag might be a good way to do this; IIUC we then won't need to pickle the model instance, which saves a lot of RAM. The tradeoff is just that the model manager is less usable by other services, if any. We can discuss in the meeting and see what to do next. Thanks!
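A sketch of that tag-only handoff, reusing the `MultiProcessShared` constructor/`acquire` shape from the test above (the loader and tag values are hypothetical):

```python
from apache_beam.utils import multi_process_shared


def acquire_model_by_tag(model_loader, tag):
  # RunInference-side: rebuild the MultiProcessShared handle from the tag
  # and acquire a local proxy, so no live proxy object is ever pickled.
  return multi_process_shared.MultiProcessShared(
      model_loader, tag=tag, always_proxy=True).acquire()


# The model manager hands back only the tag string...
tag = 'model_abc'
# ...and the client acquires the shared instance itself.
model = acquire_model_by_tag(lambda: object(), tag)
```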
/gemini review
Code Review
This pull request introduces a significant enhancement to MultiProcessShared by allowing it to spawn a dedicated server process and providing a mechanism for forceful deletion. The implementation is robust, incorporating features like a "suicide pact" for server process lifecycle management and detailed error reporting from the child to the parent process. The accompanying tests are thorough, covering various edge cases. I have a few suggestions to further improve the code, mainly around removing a redundant line of code, enhancing logging in exception handlers, and fixing a minor bug in the test setup.
From the linter: Could you please fix the import order? Other test failures look like flakes to me and will hopefully resolve on a new commit (our test infra has been sad today 😢).
damccorm left a comment
Other than linting and one question, LGTM
```python
  def singletonProxy_unsafe_hard_delete(self):
    assert self._SingletonProxy_valid
    self._SingletonProxy_entry.unsafe_hard_delete()
```
Do we still need this piece now that we're not passing around a proxy?
If we do, it's fine; it mostly depends on how you want to track models in the manager now. You can leave it for now if unsure.
Yep, technically we won't need this now since we're recreating the MPS object every time, but it's probably good to have in case we want more flexibility in the future.
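For reference, a hedged sketch of how the hard-delete path above could be exercised if it is kept (the call path through the proxy is assumed from the diff snippet, not verified against the PR):

```python
from apache_beam.utils import multi_process_shared


class Counter:  # stand-in for the Counter used in the tests
  def __init__(self):
    self.value = 0

  def increment(self):
    self.value += 1
    return self.value


shared = multi_process_shared.MultiProcessShared(
    Counter, tag='hard_delete_demo', always_proxy=True)
proxy = shared.acquire()
# Assumed to be forwarded through the proxy to the method in the diff;
# forcefully tears down the shared instance for every holder, hence "unsafe".
proxy.singletonProxy_unsafe_hard_delete()
```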