
Conversation

@guy1992l (Member) commented Jan 8, 2026

Created the langchain_mlrun module - MLRun's integration with LangChain, enabling auto-tracing and monitoring of LangChain- and LangGraph-based code (a rough usage sketch follows the bullet list below).

  • Added the module itself - fully documented and Sphinx-ready.
  • Added about 76 tests covering 100% of the current functionality.
  • Added an end-to-end notebook with brief explanations and a tutorial.
  • Made preparations for MLRun-CE support (using a Kafka stream).
  • Wrote TODOs for future work.
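For reviewers who want a feel for the integration, here is a minimal usage sketch. It is assembled from the test snippets quoted further down in this conversation (the mlrun_monitoring context manager and tracer.settings.monitor.debug_target_list); the import path, the settings object, and the model used here are assumptions for illustration, not the module's documented API:

    from langchain_mlrun import mlrun_monitoring  # assumed import path
    from langchain_core.language_models.fake_chat_models import FakeListChatModel

    # Placeholder for the settings object the tests call `manual_mode_settings`;
    # the real object presumably configures the monitoring targets.
    settings = None

    model = FakeListChatModel(responses=["Hello back!"])

    with mlrun_monitoring(settings=settings) as tracer:
        # Anything invoked inside the context is auto-traced and reported
        # to MLRun model monitoring.
        model.invoke("Hello!")

    # The tracer keeps the collected events, e.g. for debugging:
    print(len(tracer.settings.monitor.debug_target_list))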

yonishelach and others added 30 commits March 8, 2023 11:01
* [Build] Fix html links, add <function>.html as source in documentation
* [XGB-Custom] Fix test artifact key name
* [XGB-Serving][XGB-Test][XGB-Trainer] Fix tests - artifact key
* [Build] Install python 3.9 when testing

* [Build] Update python version in CI

* .
* [Build] Build with python 3.9

* .
* [Noise-reduction] Add new function to hub

* fix test

* added multiprocessing and silence removal to function
* delete EOS functions

* bring back validate_great_expectations

* bring back load_dataset
Eyal-Danieli and others added 14 commits October 8, 2024 15:26
* adjust batch infer v2

* update docs in NB
* [text to audio generator] Replaced bark with openai tts models

* [text to audio generator] Fix base url env var

* fix version

* Add speech engine

* after review
* [Build] Fix html links, Add <function>.html as source in documentation

* Update CI temporarily and update index

* [XGB-Custom] Fix test artifact key name

* [XGB-Serving][XGB-Test][XGB-Trainer] Fix tests - artifact key

* [Build] Install python 3.9 when testing (mlrun#618)

* [Build] Update python version in CI (mlrun#620)

* [Build] Install python 3.9 when testing

* [Build] Update python version in CI

* .

* Revert "[Build] Update python version in CI (mlrun#620)" (mlrun#621)

This reverts commit 0cd1f15.

* Revert "[Build] Install python 3.9 when testing (mlrun#618)" (mlrun#619)

This reverts commit 3301415.

* [Build] Build with python 3.9 (mlrun#622)

* [Build] Build with python 3.9

* .

* Update requirements.txt
* fix feature_selection

* fix feature_selection

* fix feature_selection nb

* update yaml name

* fix test

* fix test
* fix arbitrary file vulnerability

* fix arbitrary file vulnerability

* fix test
* add traversal test

* add traversal test

* add traversal test
@review-notebook-app
Check out this pull request on ReviewNB to see visual diffs and provide feedback on the Jupyter notebooks.

@guy1992l requested a review from Copilot January 8, 2026 22:50
@guy1992l requested reviews from Eyal-Danieli and omermaim and removed request for Copilot January 8, 2026 22:50
@Eyal-Danieli (Member) left a comment:
Very nice, had a few comments.

    :returns: An MLRun model endpoint monitoring client.
    """
    if mlrun.mlconf.is_ce_mode():
        return _KafkaMLRunEndPointClient(
Member:
missing monitoring_broker and monitoring_topic params

Member Author:
its a "hachana lemazgan"... there is a TODO basically anywhere related to Kafka usage.

        ) from value_error
    except ImportError as import_error:
        raise ImportError(
            f"Could not import '{module_path}'. Tried to import '{module_name}' and failed with the following "
Member:
module_name might be referenced before assignment (also object_name in the next error)

Member Author:
It's not; an ImportError cannot be raised during the split, so the module name is already assigned by then - same for the object name.
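To make the ordering argument concrete, here is a minimal sketch of the pattern under discussion. Only the names that appear in the quoted error message are taken from the PR; the surrounding helper is a reconstruction, not the actual module code:

    import importlib


    def _import_object(module_path: str):
        # The split runs before anything that can raise ImportError, so
        # `module_name` and `object_name` are always bound when the except
        # block below formats its message.
        module_name, _, object_name = module_path.rpartition(".")
        try:
            module = importlib.import_module(module_name)
            return getattr(module, object_name)
        except ImportError as import_error:
            raise ImportError(
                f"Could not import '{module_path}'. Tried to import '{module_name}' and failed with the "
                f"following error: {import_error}"
            ) from import_error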

Comment on lines +54 to +60
        self,
        tools: Sequence[
            dict[str, Any] | type | Callable | BaseTool  # noqa: UP006
        ],
        *,
        tool_choice: str | None = None,
        **kwargs: Any,
Member:
didn't find any usage for that func

Suggested change
        self,
        tools: Sequence[
            dict[str, Any] | type | Callable | BaseTool  # noqa: UP006
        ],
        *,
        tool_choice: str | None = None,
        **kwargs: Any,
*

Member Author:
It is used under the hood by create_agent in LangChain.
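As an aside for readers unfamiliar with that call path, here is a toy illustration of why an override with no direct call site in the repo can still be exercised. The _ToyModel and _toy_create_agent names are invented for this sketch; they only mimic the shape of what an agent factory does when it binds tools to the model:

    from __future__ import annotations

    from typing import Any, Callable, Sequence


    class _ToyModel:
        """Stand-in for a chat-model wrapper; only the call path matters here."""

        def bind_tools(
            self,
            tools: Sequence[dict[str, Any] | type | Callable],
            *,
            tool_choice: str | None = None,
            **kwargs: Any,
        ) -> "_ToyModel":
            print(f"bind_tools called with {len(tools)} tool(s)")
            return self


    def _toy_create_agent(model: _ToyModel, tools: Sequence[Callable]) -> _ToyModel:
        # Simplified stand-in for an agent factory: it binds the tools to the
        # model internally, so the wrapper's bind_tools override runs even
        # though user code never calls it directly.
        return model.bind_tools(tools)


    def add(a: int, b: int) -> int:
        return a + b


    _toy_create_agent(_ToyModel(), [add])  # prints: bind_tools called with 1 tool(s)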

    response = model.invoke(state["messages"])
    return {"messages": [response], "attempts": state["attempts"] + 1}

def reflect_node(state: AgentState):
Member:
not in use

Suggested change
def reflect_node(state: AgentState):
def reflect_node():

Member Author:
That's the node pattern in LangGraph; I want to mimic a real flow.
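For context, a minimal sketch of the LangGraph node pattern the reply refers to: every node receives the current state and returns a partial state update, so the (state) signature is kept even when a node only touches part of it. The AgentState fields mirror the test snippet above; everything else is illustrative:

    from typing import TypedDict

    from langgraph.graph import END, START, StateGraph


    class AgentState(TypedDict):
        messages: list
        attempts: int


    def reflect_node(state: AgentState) -> dict:
        # LangGraph merges the returned dict back into the state; keys that
        # are not returned keep their previous values.
        return {"attempts": state["attempts"] + 1}


    graph = StateGraph(AgentState)
    graph.add_node("reflect", reflect_node)
    graph.add_edge(START, "reflect")
    graph.add_edge("reflect", END)
    app = graph.compile()
    print(app.invoke({"messages": [], "attempts": 0}))  # {'messages': [], 'attempts': 1}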

"""
run_func, expected_events = run_suites

with mlrun_monitoring(settings=manual_mode_settings) as tracer:
Member:
Suggested change
with mlrun_monitoring(settings=manual_mode_settings) as tracer:
with mlrun_monitoring(settings=manual_mode_settings):

Member Author:
It is used two rows further down, at line 451:

assert len(tracer.settings.monitor.debug_target_list) == expected_events

@Eyal-Danieli (Member) left a comment:
Plus, add the copyright notice.

@guy1992l requested a review from Eyal-Danieli January 11, 2026 21:30
@omermaim left a comment:
looks good to me

@@ -0,0 +1,23 @@
apiVersion: v1
categories:
- langchain
Collaborator:
@Eyal-Danieli, which categories do we support?
Should it be under general for now?

Collaborator:
@guy1992l, do we have an option to deploy the agent code into the user's cluster?

@@ -0,0 +1,3 @@
pytest
langchain
Collaborator:
Worth pinning to a specific version, to make sure the function does not change between package releases.
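For example, a pinned test requirements file might look like the following; the version numbers are placeholders for illustration, not a recommendation:

    pytest~=8.3
    langchain~=0.3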

@@ -0,0 +1,899 @@
{
@GiladShapira94 (Collaborator) commented Jan 19, 2026:
I think we can generate the stream path using the user's project name or the MLRUN_ACTIVE_PROJECT env var. In addition, model monitoring always uses the project container, so we can just set these values as defaults.

In this example, the user does not create any serving function, so we can just set the value under the hood if the user does not provide one.

I think we can remove this line: "This is a temporary workaround until custom endpoint creation support is added to MLRun."

Once MLRun adds this feature, we will update the notebook.
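A rough sketch of the fallback being suggested, assuming the active project is exposed via the MLRUN_ACTIVE_PROJECT env var mentioned above; the default_stream_path helper and the concrete path template are assumptions for illustration, not MLRun's published API:

    import os
    from typing import Optional


    def default_stream_path(project: Optional[str] = None) -> str:
        # Fall back to the active project when the user does not pass one explicitly.
        project = project or os.getenv("MLRUN_ACTIVE_PROJECT", "default")
        # Assumed template: model monitoring writes under the project's own container.
        return f"v3io:///projects/{project}/model-endpoints/stream"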



@@ -0,0 +1,899 @@
{
@GiladShapira94 (Collaborator) commented Jan 19, 2026:
  1. We should use the same requirements.txt file. If it needs to be a different file, I think it would be better to create an application_requirements.txt.
  2. Maybe it's worth adding another step that invokes the chain after the application is deployed.


