
Conversation

Contributor

@AMOOOMA AMOOOMA commented Dec 15, 2025


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

Contributor Author

AMOOOMA commented Dec 15, 2025

R: @damccorm

@gemini-code-assist
Contributor

Summary of Changes

Hello @AMOOOMA, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the MultiProcessShared utility by introducing the capability to host shared objects within dedicated, spawned processes. This change provides more explicit control over the lifecycle of shared resources, allowing for direct termination and cleanup of the serving process, and includes mechanisms to prevent zombie processes and handle server startup failures gracefully.

Highlights

  • Process Spawning for Shared Objects: The MultiProcessShared utility now supports hosting shared objects within dedicated, spawned processes, controlled by a new spawn_process boolean parameter in its constructor.
  • Direct Object Deletion: A new unsafe_hard_delete method has been introduced, allowing for the explicit termination and cleanup of the shared object and its serving process, providing more granular control over resource lifecycle.
  • Robust Process Management: The newly spawned server processes include a 'Suicide Pact' monitor, ensuring they automatically terminate if their parent process dies. The server startup also incorporates robust error handling and logging.
  • Zombie Process Reaping: The acquire method now includes logic to sweep and reap any finished (zombie) child processes, improving overall resource management and preventing accumulation of defunct processes.
  • Serialization Support for Proxies: The _AutoProxyWrapper class now includes __setstate__ and __getstate__ methods, enhancing the serialization capabilities of proxy objects.
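The "Suicide Pact" monitor and zombie sweep described above can be sketched with stdlib primitives. This is a hedged illustration of the two patterns, not Beam's actual implementation; the helper names (`start_parent_monitor`, `reap_finished_children`) and their signatures are hypothetical.

```python
import multiprocessing
import os
import threading
import time


def start_parent_monitor(parent_pid, poll_interval=1.0, on_orphan=None):
  # Sketch of the 'Suicide Pact' pattern: when the parent dies, the child
  # is re-parented by the OS and os.getppid() changes, which a polling
  # daemon thread can detect and react to.
  if on_orphan is None:
    on_orphan = lambda: os._exit(1)  # hard exit, skipping cleanup handlers

  def monitor():
    while True:
      if os.getppid() != parent_pid:
        on_orphan()
        return
      time.sleep(poll_interval)

  t = threading.Thread(target=monitor, daemon=True)
  t.start()
  return t


def reap_finished_children(children):
  # Sketch of the zombie sweep: join() any exited child so the OS can
  # release its process-table entry, and return the still-live ones.
  alive = []
  for proc in children:
    if proc.is_alive():
      alive.append(proc)
    else:
      proc.join()  # reap the exited (defunct) process
  return alive
```

In the monitor, `on_orphan` defaults to `os._exit` rather than `sys.exit` so the orphaned server dies immediately without running interpreter cleanup that could hang on manager state.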

@github-actions
Contributor

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

  def __setstate__(self, state):
    self.__dict__.update(state)

  def __getstate__(self):
    return self.__dict__
Contributor

I assume this is so that this is pickleable, but is it valid? Normally I'd expect this to not be pickleable since the proxy objects aren't necessarily valid in another context

Contributor Author

Yeah, this is exactly what was needed for the pickling stuff. It does seem to be valid in testing with the custom-built Beam version loaded on a custom container.

Contributor

I think it would only be valid if you unpickle onto the same machine (and maybe even in the same process). Could you remind me what unpickling issues you ran into?

Contributor Author

Just tried removing these and ran the test locally; it's this infinite recursion that happens if I have a proxy on a proxy:

<string>:2: in make_proxy
    ???
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/managers.py:822: in _callmethod
    kind, result = conn.recv()
                   ^^^^^^^^^^^
../../../../.pyenv/versions/3.11.14/lib/python3.11/multiprocessing/connection.py:251: in recv
    return _ForkingPickler.loads(buf.getbuffer())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
                   ^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
                   ^^^^^^^^^^^^^^^^^
apache_beam/utils/multi_process_shared.py:226: in __getattr__
    return getattr(self._proxyObject, name)
                   ^^^^^^^^^^^^^^^^^
E   RecursionError: maximum recursion depth exceeded
!!! Recursion detected (same locals & position)
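For reference, this failure mode can be reproduced outside Beam: during unpickling the instance's `__dict__` is empty until its state is restored, so any attribute lookup (including pickle's probe for `__setstate__`) falls into `__getattr__`, which itself reads an instance attribute and recurses. A minimal stdlib sketch, with a hypothetical `Wrapper` standing in for `_AutoProxyWrapper`:

```python
import pickle


class Wrapper:
  """Minimal stand-in for a wrapper that forwards attribute access."""
  def __init__(self, obj):
    self._wrapped = obj

  def __getattr__(self, name):
    # Only called when normal lookup fails. While unpickling, before
    # __dict__ is restored, '_wrapped' is missing too, so this lookup
    # would re-enter __getattr__ forever without the methods below.
    return getattr(self._wrapped, name)

  def __getstate__(self):
    return self.__dict__

  def __setstate__(self, state):
    # Restores __dict__ directly, so pickle never has to go through
    # __getattr__ on a half-initialized instance.
    self.__dict__.update(state)


w = pickle.loads(pickle.dumps(Wrapper([1, 2, 3])))
assert w.count(2) == 1  # attribute access forwards to the wrapped list
```

Because `__setstate__` is found on the class by normal lookup, unpickling restores the state without ever triggering `__getattr__`, which is why adding the two methods breaks the cycle.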

Contributor Author

By proxy on proxy I meant that first a MultiProcessShared object is created, and the instance initialized inside it also tries to create MultiProcessShared objects. So for example, like this test:

class SimpleClass:
  def make_proxy(
      self, tag: str = 'proxy_on_proxy', spawn_process: bool = False):
    return multi_process_shared.MultiProcessShared(
        Counter, tag=tag, always_proxy=True,
        spawn_process=spawn_process).acquire()

def test_proxy_on_proxy(self):
  shared1 = multi_process_shared.MultiProcessShared(
      SimpleClass, tag='proxy_on_proxy_main', always_proxy=True)
  instance = shared1.acquire()
  proxy_instance = instance.make_proxy()
  self.assertEqual(proxy_instance.increment(), 1)

Contributor Author

The stacktrace unfortunately stops here; the output above doesn't have more either.

Contributor

Does this mean we're also double proxying the data (once from client to model manager, once to model manager process)?

Otherwise we will need to make RunInference do the work to manage the model instances to avoid this pattern. WDYT?

I think this is ok - it shouldn't need to be a ton of code (basically a "check in before inference and after inference", and I think it will end up being more efficient

Contributor Author

The data would go directly from the client to the model; the model manager will just give the proxy object of the model instance directly to RunInference.

I also just realized we might have to have the proxy hold on to the proxy/reference of the model instances, because otherwise sharing the model instances across different SDK harnesses will be challenging, and we would probably end up storing the same info as the proxy object (uri, port, etc.) anyway.

Contributor

the model manager will just give the proxy object of the model instance directly to RunInference.

Oh right, this is why it needs to be pickled in the first place, since we copy over the full object.

I also just realized we might have to have the proxy hold on to the proxy/reference of the model instances, because otherwise sharing the model instances across different SDK harnesses will be challenging, and we would probably end up storing the same info as the proxy object (uri, port, etc.) anyway.

I'm not following what you're saying here, I think because proxy is an overloaded term here. I think maybe you're saying that the proxy returned from the model manager to the client might not have a valid reference to the actual model, at which point we'd need to have tighter coordination with the model manager anyways. I think this is probably right.

Contributor Author

Yeah, if we return the MultiProcessShared object back it will probably end up needing to be pickled for it to work. Unless the model manager doesn't hand back the model instance and only the tag, in which case RunInference will try to create a MultiProcessShared object with the tag.

Having the model manager only manage and return the MPS tag might be a good way to do this, since iiuc we won't need to pickle the model instance, which saves a lot of RAM here. The tradeoff is just that the model manager is less usable by other services, if any. We can discuss in the meeting and see what to do best next. Thanks!
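A rough sketch of the tag-only handoff being discussed, with hypothetical names (`ModelManager.ensure_model`, `acquire_by_tag` are illustrative, not Beam's API): the manager hands out only the string tag, and each client rebuilds its own shared handle from that tag, so no live proxy ever needs to be pickled.

```python
# Registry mapping tag -> loader; stands in for whatever bookkeeping the
# model manager would actually keep.
_registry = {}


class ModelManager:
  def ensure_model(self, tag, loader):
    # Record how to build the model, but hand back only the tag --
    # a plain string that is trivially (and safely) picklable.
    _registry.setdefault(tag, loader)
    return tag


def acquire_by_tag(tag, cache={}):
  # Stand-in for MultiProcessShared(loader, tag=tag).acquire(): each
  # worker resolves the tag to an instance itself instead of unpickling
  # a proxy handed over from another process. (Mutable default arg used
  # deliberately here as a process-local cache.)
  if tag not in cache:
    cache[tag] = _registry[tag]()
  return cache[tag]
```

Under this scheme only the tag string crosses process boundaries, so the `__getstate__`/`__setstate__` question for proxies disappears for this path.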

@damccorm
Contributor

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant enhancement to MultiProcessShared by allowing it to spawn a dedicated server process and providing a mechanism for forceful deletion. The implementation is robust, incorporating features like a "suicide pact" for server process lifecycle management and detailed error reporting from the child to the parent process. The accompanying tests are thorough, covering various edge cases. I have a few suggestions to further improve the code, mainly around removing a redundant line of code, enhancing logging in exception handlers, and fixing a minor bug in the test setup.

@damccorm
Contributor

From the linter:

ERROR: /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared_test.py Imports are incorrectly sorted and/or formatted.
ERROR: /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared.py Imports are incorrectly sorted and/or formatted.
--- /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared_test.py:before	2026-01-29 20:36:00.435928
+++ /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared_test.py:after	2026-01-29 20:46:31.213224
@@ -17,10 +17,10 @@
 # pytype: skip-file
 
 import logging
+import multiprocessing
+import os
+import tempfile
 import threading
-import tempfile
-import os
-import multiprocessing
 import unittest
 from typing import Any
 
--- /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared.py:before	2026-01-29 20:36:00.437928
+++ /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/apache_beam/utils/multi_process_shared.py:after	2026-01-29 20:46:31.298547
@@ -22,15 +22,15 @@
 """
 # pytype: skip-file
 
+import atexit
 import logging
 import multiprocessing.managers
 import os
-import time
-import traceback
-import atexit
 import sys
 import tempfile
 import threading
+import time
+import traceback
 from typing import Any
 from typing import Callable
 from typing import Dict
Command exited with non-zero status 1
1157.19user 21.72system 2:39.90elapsed 737%CPU (0avgtext+0avgdata 876932maxresident)k
864inputs+2880outputs (7major+2075754minor)pagefaults 0swaps
Skipped 445 files
lint: exit 1 (159.91 seconds) /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python> time /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/scripts/run_pylint.sh pid=1418
lint: commands_post[0]> bash /runner/_work/beam/beam/sdks/python/test-suites/tox/pycommon/build/srcs/sdks/python/scripts/run_tox_cleanup.sh
  lint: FAIL code 1 (632.19=setup[470.08]+cmd[0.00,0.37,1.43,0.05,0.30,159.91,0.04] seconds)
  evaluation failed :( (632.36 seconds)

> Task :sdks:python:test-suites:tox:pycommon:lint FAILED

Could you please fix the import order?

Other test failures look like flakes to me and will hopefully resolve on a new commit (our test infra has been sad today 😢 )

@damccorm damccorm left a comment

Other than linting and one question, LGTM


  def singletonProxy_unsafe_hard_delete(self):
    assert self._SingletonProxy_valid
    self._SingletonProxy_entry.unsafe_hard_delete()
Contributor

Do we still need this piece now that we're not passing around a proxy?

Contributor

If we do, it's fine; it mostly depends on how you now want to track models in the manager. You can leave it for now if unsure.

Contributor Author

Yep, technically we won't need this now since we are recreating the MPS object every time. But it's probably good to have in case we want more flexibility in the future.
