This repository was archived by the owner on Apr 15, 2024. It is now read-only.
2 changes: 1 addition & 1 deletion EndToEndExample/Makefile
@@ -33,5 +33,5 @@ ifndef DOCKER_PREGISTRY_URL
$(error environment variable DOCKER_PREGISTRY_URL is not set. Set it like <docker-registry-url>/<namespace-org>)
endif
ifndef PROJECT_NAME
-$(error environment variable PROJECT_NAME is not set. Set this to Cortex project name.)
+$(error environment variable PROJECT_NAME is not set. Set this to Sensa project name.)
endif
30 changes: 15 additions & 15 deletions EndToEndExample/README.md
@@ -1,12 +1,12 @@
-### Cortex End to End Example Demonstrating cortex-python SDK (Train and Predict Skill)
+### Sensa End to End Example Demonstrating sensa-python SDK (Train and Predict Skill)

### Prerequisites
- Python 3.x
- Docker client
- Bash shell ( PowerShell TBA )
-- Cortex client ( installed )
+- Sensa client ( installed )
- URL/Credentials for a cortex instance
-- Use `make install` to install the cortex-python SDK
+- Use `make install` to install the sensa-python SDK

#### Connections and Data Set
For simplicity, we selected the German credit dataset (find it in [data](data)) and train on this dataset.
@@ -20,17 +20,17 @@ There are certain configs and secrets like AWS Public key and Private Key we nee
Example: If you have a secret key called `awssecret` set in your project, you can use it to create connections by using `#SECURE.awssecret` as the value of one of the parameters in the Connections object. Similarly, any secret key such as `<secret_name>` can be used as `#SECURE.<secret_name>`. This needs to be updated in [config.py](config.py)
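The `#SECURE.<secret_name>` substitution can be pictured with a small sketch. This is illustrative only — the platform resolves these references for you, and `resolve_secure_params`, the parameter names, and the secret values below are all hypothetical:

```python
# Illustrative only: a toy version of the #SECURE.<name> substitution the
# platform performs when a connection is used. Nothing here is repo code.
SECURE_PREFIX = "#SECURE."

def resolve_secure_params(conn_params: dict, secrets: dict) -> dict:
    """Replace any '#SECURE.<name>' value with the matching project secret."""
    resolved = {}
    for key, value in conn_params.items():
        if isinstance(value, str) and value.startswith(SECURE_PREFIX):
            secret_name = value[len(SECURE_PREFIX):]
            if secret_name not in secrets:
                raise KeyError(f"secret '{secret_name}' is not set in the project")
            resolved[key] = secrets[secret_name]
        else:
            resolved[key] = value
    return resolved

# Hypothetical connection parameters mirroring the `awssecret` example above
conn_params = {"bucket": "my-bucket", "secretKey": "#SECURE.awssecret"}
print(resolve_secure_params(conn_params, {"awssecret": "xyz"}))
# → {'bucket': 'my-bucket', 'secretKey': 'xyz'}
```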

#### Train Action:
-Cortex action to train and save model in Experiment.
-This is implemented as a background job (Cortex Action of type Job) that trains a model on specified connection.
+Sensa action to train and save model in Experiment.
+This is implemented as a background job (Sensa Action of type Job) that trains a model on specified connection.
It uses the connection saved in the above step

-In the end trained model is saved with metadata in Cortex Experiments to fetch and run predictions later.
+In the end trained model is saved with metadata in Sensa Experiments to fetch and run predictions later.
Every time we run this Job to train a model, the model is versioned and stored with its metadata.
Later, this metadata and these version numbers are used to select the required model.
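The versioning behaviour described above can be sketched without the SDK. The `ExperimentStore` below is a toy stand-in for Sensa Experiments — all names here are illustrative, not the SDK's API:

```python
import itertools

class ExperimentStore:
    """Toy stand-in for an experiment registry: every save creates a new
    versioned run with metadata; lookups fetch the latest or a specific run."""
    def __init__(self):
        self._runs = []
        self._ids = itertools.count(1)

    def save_run(self, model, metadata: dict) -> int:
        run_id = next(self._ids)
        self._runs.append({"run_id": run_id, "model": model, "metadata": metadata})
        return run_id

    def get_run(self, run_id=None) -> dict:
        if run_id is None:          # default: latest run
            return self._runs[-1]
        return next(r for r in self._runs if r["run_id"] == run_id)

store = ExperimentStore()
store.save_run("model-v1", {"algo": "DecisionTree", "accuracy": 0.71})
store.save_run("model-v2", {"algo": "MLP", "accuracy": 0.74})
print(store.get_run()["metadata"]["algo"])   # → MLP
print(store.get_run(run_id=1)["model"])      # → model-v1
```

In the real action, the model artifact plus metadata such as the algorithm name and accuracy are what get saved with each run.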

#### Predict Action:
-This Action demonstrates fetching saved model from Cortex Experiment and running predictions on it.
-For simplicity, this skill is loading the latest trained model in the Cortex Experiment(we can specify the `run_id` as well).
+This Action demonstrates fetching saved model from Sensa Experiment and running predictions on it.
+For simplicity, this skill is loading the latest trained model in the Sensa Experiment(we can specify the `run_id` as well).
In a production environment, users can select a model at a specific version or based on saved metadata.
This will load and cache the model in memory for subsequent prediction invocations.
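The load-and-cache behaviour mirrors the `model_ctx` dictionary in `actions/predict/main.py`. A minimal sketch, with a stub `init_model` standing in for the real Experiment fetch:

```python
model_ctx = {}   # experiment name -> loaded model, kept across invocations
load_count = 0   # only here to show that loading happens once

def init_model(exp_name, run_id):
    """Stub for fetching a saved model from an Experiment."""
    global load_count
    load_count += 1
    return f"model:{exp_name}:{run_id or 'latest'}"

def run(exp_name, run_id=None):
    # load the model only on the first request for this experiment
    if exp_name not in model_ctx:
        model_ctx[exp_name] = init_model(exp_name, run_id)
    return model_ctx[exp_name]

run("german-credit")   # first call loads the model
run("german-credit")   # subsequent calls hit the cache
print(load_count)      # → 1
```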

@@ -42,28 +42,28 @@ you may change to another framework or language.
* `skill.json` Skill definition and action mapping
* `conn.json` Connection definition
* `config.py` Configuration file to update connection and project configurations and secrets
-* `actions/train/train.py` Model train code to be run as Cortex Job
+* `actions/train/train.py` Model train code to be run as Sensa Job
* `actions/train/requirements.txt` Python3 libraries dependencies
* `actions/train/Dockerfile` to build Docker image for train action
* `Makefile` Makefile to build and push Train and Predict Action Docker images to the specified `DOCKER_PREGISTRY_URL`
-* `actions/predict/main.py` Predict code to be run as Cortex Daemon
+* `actions/predict/main.py` Predict code to be run as Sensa Daemon
* `actions/predict/requirements.txt` Python3 libraries dependencies
* `actions/predict/Dockerfile` to build Docker image for predict action
-* `deploy_skill.py` Uses cortex-python SDK to deploy skill and actions
+* `deploy_skill.py` Uses sensa-python SDK to deploy skill and actions
* `tests/test_train.json` Sample Payload for Train Skill
* `tests/test_predict.json` Sample Payload for Predict Skill

#### Steps

-A Makefile is provided to do these steps. Set environment variables `DOCKER_PREGISTRY_URL` (like `<docker-registry-url>/<namespace-org>`) and `PROJECT_NAME` (Cortex Project Name) and use Makefile to build and push docker images of Train and Predict Actions.
+A Makefile is provided to do these steps. Set environment variables `DOCKER_PREGISTRY_URL` (like `<docker-registry-url>/<namespace-org>`) and `PROJECT_NAME` (Sensa Project Name) and use Makefile to build and push docker images of Train and Predict Actions.
`make deploy` will build & push Docker images for Train and Predict Actions and then deploy the skill (wrapping the actions).
-Alternatively running the cells in [cortex-python_v6_example.ipynb](cortex-python_v6_example.ipynb)
+Alternatively running the cells in [sensa-python_v6_example.ipynb](sensa-python_v6_example.ipynb)
will also deploy the skills and the actions.

In order to modify the actions follow the steps below:


-1. Set environment variables `DOCKER_PREGISTRY_URL` (like `<docker-registry-url>/<namespace-org>`) and `PROJECT_NAME` (Cortex Project Name)
+1. Set environment variables `DOCKER_PREGISTRY_URL` (like `<docker-registry-url>/<namespace-org>`) and `PROJECT_NAME` (Sensa Project Name)
2. Start by modifying the `conn.json` file updating the connection definition
3. Make sure the secrets such as AWS_PUBLIC_KEY, S3_BUCKET, FILE_NAME, API_ENDPOINT and CORTEX_TOKEN are updated in the `config.py` file
4. Modify the main executable (`main.py` by default) run by the action image's entrypoint/command to handle the action's custom logic.
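The `config.py` referenced in step 3 can be sketched from the names `deploy_skill.py` imports (`PROJECT_ID`, `API_ENDPOINT`, `CORTEX_TOKEN`, `CONN_PARAMS`). Every key and value below is a placeholder assumption, not the repo's actual file:

```python
import os

# Placeholders — substitute your own instance details.
PROJECT_ID = os.getenv("PROJECT_NAME", "my-project")
API_ENDPOINT = os.getenv("API_ENDPOINT", "https://api.example.sensa.io")
CORTEX_TOKEN = os.getenv("CORTEX_TOKEN", "<jwt-token>")

# Hypothetical connection parameters; secret-backed values use #SECURE.<name>.
CONN_PARAMS = {
    "publicKey": "#SECURE.awspublickey",
    "secretKey": "#SECURE.awssecret",
    "bucket": os.getenv("S3_BUCKET", "my-bucket"),
    "file": os.getenv("FILE_NAME", "data.csv"),
}
```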
@@ -189,4 +189,4 @@ And the Predict action output will look something like this



-For more details about how to build skills go to [Cortex Fabric Documentation - Development - Develop Skills](https://cognitivescale.github.io/cortex-fabric/docs/build-skills/define-skills)
+For more details about how to build skills go to [Sensa Fabric Documentation - Development - Develop Skills](https://cognitivescale.github.io/cortex-fabric/docs/build-skills/define-skills)
6 changes: 3 additions & 3 deletions EndToEndExample/actions/predict/main.py
@@ -2,8 +2,8 @@
Copyright (c) 2021. Cognitive Scale Inc. All rights reserved.
"""
> Contributor comment: Do we need to update the Copyright to be from TT?


-from cortex import Cortex
-from cortex.experiment import Experiment
+from sensa import Sensa
+from sensa.experiment import Experiment
from fastapi import FastAPI

import numpy as np
@@ -25,7 +25,7 @@ def run(req: dict):
run_id = payload["run_id"]

# if model is not loaded
-client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"])
+client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"])

model_ctx[exp_name] = init_model(exp_name, run_id, client)

2 changes: 1 addition & 1 deletion EndToEndExample/actions/predict/requirements.txt
@@ -1,6 +1,6 @@
pandas==1.3.0
numpy==1.22.2
-cortex-python==6.3.1a1
+sensa-python==6.5.0
scikit-learn==0.23.2
fastapi==0.68.0
uvicorn==0.14.0
2 changes: 1 addition & 1 deletion EndToEndExample/actions/train/requirements.txt
@@ -1,5 +1,5 @@
pandas==1.3.0
numpy==1.22.2
-cortex-python==6.3.1a1
+sensa-python==6.5.0
scikit-learn==0.23.2
boto3==1.18.19
8 changes: 4 additions & 4 deletions EndToEndExample/actions/train/train.py
@@ -13,8 +13,8 @@
import pandas as pd
from sklearn.model_selection import train_test_split
from cat_encoder import CatEncoder
-from cortex import Cortex
-from cortex.experiment import Experiment
+from sensa import Sensa
+from sensa.experiment import Experiment
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn import svm
@@ -96,7 +96,7 @@ def save_experiment(client, experiment_name, filename, algo, model_id):

# train model using the connection
def train(params):
-client = Cortex.from_message(params)
+client = Sensa.from_message(params)

payload = params['payload']
# Read connection
@@ -192,5 +192,5 @@ def train(params):
if len(sys.argv) < 2:
print("Message/payload commandline is required")
exit(1)
-# The last argument in sys.argv is the payload from cortex
+# The last argument in sys.argv is the payload from sensa
train(json.loads(sys.argv[-1]))
8 changes: 4 additions & 4 deletions EndToEndExample/deploy_skill.py
@@ -4,10 +4,10 @@
"""
## steps
1. S3 bucket contains the data(in this case csv file)
-2. We save a connection of the given types -> (demonstrated using cortex-python sdk)
+2. We save a connection of the given types -> (demonstrated using sensa-python sdk)
5. Dockerize and push an action to train multiple models and save in experiments (use the connection in step 2 to load the data) -> job
3. Dockerize and push another action to predict using the experiment name (to specify a particular model) -> daemon
-4. Save a skill definition, and route inputs to two different actions -> (demonstrated using cortex-python sdk)
+4. Save a skill definition, and route inputs to two different actions -> (demonstrated using sensa-python sdk)
5. Test the skills using cli
"""

@@ -22,7 +22,7 @@

from config import PROJECT_ID, API_ENDPOINT, CORTEX_TOKEN, CONN_PARAMS

-from cortex import Cortex
+from sensa import Sensa


params = {
@@ -32,7 +32,7 @@
}

if __name__ == "__main__":
-client = Cortex.client()
+client = Sensa.client()
conn_params = {}
with open("conn.json") as f:
conn_params = json.load(f)
2 changes: 1 addition & 1 deletion EndToEndExample/requirements.txt
@@ -1 +1 @@
-cortex-python==6.3.1a1
+sensa-python==6.5.0
@@ -7,10 +7,10 @@
"source": [
"## steps\n",
"1. S3 bucket contains the data(in this case csv file) \n",
"2. We save a connection of the given types -> (demonstrated using cortex-python sdk)\n",
"2. We save a connection of the given types -> (demonstrated using sensa-python sdk)\n",
"5. Dockerize and push an action to train multiple models and save in expermients (use the connection in step 2 to load the data) -> job\n",
"3. Dockerize and push another action to predict using the exprinment name(to specify a particular model) -> daemon\n",
"4. Save a skill definition, and route inputs to two different actions -> (demonstrated using cortex-python sdk)\n",
"4. Save a skill definition, and route inputs to two different actions -> (demonstrated using sensa-python sdk)\n",
"5. Test the skills using cli"
]
},
@@ -37,7 +37,7 @@
"outputs": [],
"source": [
"# install dependencies\n",
"!pip install cortex-python==6.3.1a1"
"!pip install sensa-python==6.5.0"
]
},
{
@@ -52,7 +52,7 @@
"import json\n",
"from pprint import pprint\n",
"\n",
"from cortex import Cortex"
"from sensa import Sensa"
]
},
{
@@ -96,7 +96,7 @@
"metadata": {},
"outputs": [],
"source": [
"client = Cortex.client(api_endpoint=params['apiEndpoint'], project=params['projectId'], token=params['token'])"
"client = Sensa.client(api_endpoint=params['apiEndpoint'], project=params['projectId'], token=params['token'])"
]
},
{
@@ -278,7 +278,7 @@
" 'title': 'Daemon Port',\n",
" 'type': 'string',\n",
" 'validValues': []}],\n",
" 'title': 'End to End example with cortex-python v6 SDK'}\n"
" 'title': 'End to End example with sensa-python v6 SDK'}\n"
]
}
],
@@ -457,7 +457,7 @@
" 'type': 'string',\n",
" 'validValues': []}],\n",
" 'tags': [],\n",
" 'title': 'End to End example with cortex-python v6 SDK'}\n"
" 'title': 'End to End example with sensa-python v6 SDK'}\n"
]
}
],
@@ -501,7 +501,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "cortex-python",
"display_name": "sensa-python",
"language": "python",
"name": "python3"
},
2 changes: 1 addition & 1 deletion EndToEndExample/skill.json
@@ -1,7 +1,7 @@
{
"name": "e2e-example",
"camel": "1.0.0",
"title": "End to End example with cortex-python v6 SDK",
"title": "End to End example with sensa-python v6 SDK",
"description": "Training multiple models and using them to predict",
"inputs": [
{
6 changes: 3 additions & 3 deletions ExperimentsExample/predict/main.py
@@ -2,8 +2,8 @@
Copyright (c) 2021. Cognitive Scale Inc. All rights reserved.
"""

-from cortex import Cortex
-from cortex.experiment import Experiment
+from sensa import Sensa
+from sensa.experiment import Experiment
from fastapi import FastAPI
from cat_encoder import CatEncoder

@@ -26,7 +26,7 @@ def run(req: dict):
run_id = payload["run_id"]

# if model is not loaded
-client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"])
+client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"])
model_ctx[exp_name] = init_model(exp_name, run_id, client)

# retrieve model from the context
2 changes: 1 addition & 1 deletion ExperimentsExample/predict/requirements.txt
@@ -1,6 +1,6 @@
pandas
numpy
-cortex-python==6.3.1
+sensa-python==6.5.0
scikit-learn==0.24.2
fastapi
uvicorn
2 changes: 1 addition & 1 deletion ExperimentsExample/train/requirements.txt
@@ -1,5 +1,5 @@
pandas
-cortex-python==6.3.1
+sensa-python==6.5.0
scikit-learn==0.24.2
numpy
boto3
8 changes: 4 additions & 4 deletions ExperimentsExample/train/train.py
@@ -13,8 +13,8 @@
import pandas as pd
from sklearn.model_selection import train_test_split
from cat_encoder import CatEncoder
-from cortex import Cortex
-from cortex.experiment import Experiment
+from sensa import Sensa
+from sensa.experiment import Experiment
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn import svm
@@ -91,7 +91,7 @@ def save_experiment(client, experiment_name, filename, algo, model_id):
def train(params):
project = params['projectId']
# create a Cortex client instance from the job's parameters
-# client = Cortex.client(api_endpoint=params['apiEndpoint'], project=project, token=params['token'])
+# client = Sensa.client(api_endpoint=params['apiEndpoint'], project=project, token=params['token'])
client = Cortex.from_message(params)

payload = params['payload']
@@ -181,5 +181,5 @@ def train(params):
if len(sys.argv) < 2:
print("Message/payload commandline is required")
exit(1)
-# The last argument in sys.argv is the payload from cortex
+# The last argument in sys.argv is the payload from sensa
train(json.loads(sys.argv[-1]))
2 changes: 1 addition & 1 deletion InterventionSkillExample/requirements.txt
@@ -1,3 +1,3 @@
-cortex-python
+sensa-python
fastapi
uvicorn
2 changes: 1 addition & 1 deletion JobChaining/README.md
@@ -8,7 +8,7 @@ The filename uses the `activationId` which is passed to all skills/actions durin
- Python 3.x
- Docker client
- Bash shell ( Power shell TBA )
-- Cortex client
+- Sensa client
- URL/Credentials for a cortex instance

## Deploying
12 changes: 6 additions & 6 deletions JobChaining/skills/dataconsumer/job.py
@@ -3,15 +3,15 @@

Licensed under CognitiveScale Example Code [License](https://cognitivescale.github.io/cortex-fabric-examples/LICENSE.md)
"""
-from cortex import Cortex
-from cortex.content import ManagedContentClient
+from sensa import Sensa
+from sensa.content import ManagedContentClient
import pandas as pd
import sys
import json

def process(params):
-# create a Cortex client instance from the job's parameters
-client = Cortex.client(api_endpoint=params['apiEndpoint'], token=params['token'])
+# create a Sensa client instance from the job's parameters
+client = Sensa.client(api_endpoint=params['apiEndpoint'], token=params['token'])
# get the agent payload
payload = params['payload']
# You can print logs to the console these are collected by docker/k8s
@@ -23,7 +23,7 @@ def process(params):
raise Exception("'datafileKey' is required in the payload")
content_key = payload['datafileKey']
print(f'Fetching datafile from managed content: {content_key}')
-# use the `client` instance to use Cortex client libraries
+# use the `client` instance to use Sensa client libraries
content_client = ManagedContentClient(client);
# This is streaming the records to Cortex's managed content
# if this was called as part of an agent
@@ -36,5 +36,5 @@
if len(sys.argv)<2:
print("Message/payload commandline is required")
exit(1)
-# The last argument in sys.argv is the payload from cortex
+# The last argument in sys.argv is the payload from sensa
process(json.loads(sys.argv[-1]))
2 changes: 1 addition & 1 deletion JobChaining/skills/dataconsumer/requirements.txt
@@ -1,2 +1,2 @@
-cortex-python
+sensa-python
pandas