diff --git a/EndToEndExample/Makefile b/EndToEndExample/Makefile index 79afbffa..a4550f8f 100644 --- a/EndToEndExample/Makefile +++ b/EndToEndExample/Makefile @@ -33,5 +33,5 @@ ifndef DOCKER_PREGISTRY_URL $(error environment variable DOCKER_PREGISTRY_URL is not set. Set it like /) endif ifndef PROJECT_NAME - $(error environment variable PROJECT_NAME is not set. Set this to Cortex project name.) + $(error environment variable PROJECT_NAME is not set. Set this to the Sensa project name.) endif \ No newline at end of file diff --git a/EndToEndExample/README.md b/EndToEndExample/README.md index b3c1a423..6dfb3133 100644 --- a/EndToEndExample/README.md +++ b/EndToEndExample/README.md @@ -1,12 +1,12 @@ -### Cortex End to End Example Demonstrating cortex-python SDK (Train and Predict Skill) +### Sensa End to End Example Demonstrating sensa-python SDK (Train and Predict Skill) ### Prerequisites - Python 3.x - Docker client - Bash shell ( Power shell TBA ) -- Cortex client ( installed ) +- Sensa client ( installed ) -- URL/Credentials for a cortex instance +- URL/Credentials for a Sensa instance -- Use `make install` to install the cortex-python SDK +- Use `make install` to install the sensa-python SDK #### Connections and Data Set -For simplicity we selected german credit dataset(find it in [data](data)) and training on this dataset. +For simplicity we selected the German credit dataset (find it in [data](data)) and train on it. @@ -20,17 +20,17 @@ There are certain configs and secrets like AWS Public key and Private Key we nee -Example: If you have a secret key called `awssecret` set in your project you can use it to create connections by Using `#SECURE.awssecret` as the value to one of the parameters in the Connections Object. Similary any secret key set such as `` can be used as `#SECURE.`. This needs to be updated in [config.py](config.py) +Example: If you have a secret key called `awssecret` set in your project, you can use it to create connections by using `#SECURE.awssecret` as the value of one of the parameters in the Connections object. Similarly, any secret key set such as `` can be used as `#SECURE.`. This needs to be updated in [config.py](config.py) #### Train Action: -Cortex action to train and save model in Experiment. -This is implemented as a background job (Cortex Action of type Job) that trains a model on specified connection. +Sensa action to train a model and save it in an Experiment. 
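As background for the train action, the job entrypoint pattern it uses (visible later in this patch in `actions/train/train.py`) can be sketched as follows. The field names `apiEndpoint`, `projectId`, `token`, and `payload` come from the example code itself; the sample values in the usage line are hypothetical, and the real code goes on to call `Sensa.from_message(params)` to build an SDK client.

```python
import json

# Sketch of the job entrypoint pattern used by actions/train/train.py.
# Sensa passes the job message as the last command-line argument; the real
# code then calls Sensa.from_message(params) to obtain an SDK client.
def parse_job_message(argv):
    if len(argv) < 2:
        raise SystemExit("Message/payload commandline is required")
    params = json.loads(argv[-1])
    # Every job message carries apiEndpoint, token, projectId, and the
    # skill input under 'payload' (per the example code).
    return params["apiEndpoint"], params["projectId"], params.get("payload", {})
```

Usage (hypothetical values): `parse_job_message(["train.py", '{"apiEndpoint": "https://api.example.com", "projectId": "demo", "token": "abc", "payload": {"connection_name": "credit-data"}}'])`.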
+This is implemented as a background job (Sensa Action of type Job) that trains a model on a specified connection. It uses the connection saved in the above step -In the end trained model is saved with metadata in Cortex Experiments to fetch and run predictions later. +In the end, the trained model is saved with its metadata in Sensa Experiments so it can be fetched to run predictions later. -Every time we run this Job to train model, the model will be versioned and stored with its metadata. +Every time we run this Job to train a model, the model is versioned and stored with its metadata. -Later this metadata and version numbers will be used to select required model. +Later, this metadata and the version numbers are used to select the required model. #### Predict Action: -This Action demonstrates fetching saved model from Cortex Experiment and running predictions on it. -For simplicity, this skill is loading the latest trained model in the Cortex Experiment(we can specify the `run_id` as well). +This Action demonstrates fetching a saved model from a Sensa Experiment and running predictions with it. +For simplicity, this skill loads the latest trained model in the Sensa Experiment (a `run_id` can be specified as well). -In production environment, users can select a model at specific version or based on metadata saved. +In a production environment, users can select a model at a specific version or based on saved metadata. This will load and cache the model in memory for subsequent prediction invocation. @@ -42,28 +42,28 @@ you may change to another framework or language. 
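The in-memory caching described for the predict action boils down to a lazy dictionary lookup. A minimal sketch follows; the names `model_ctx`, `exp_name`, and `run_id` mirror those in `actions/predict/main.py`, while `load_model` stands in for the real fetch, which builds a Sensa client and pulls the artifact from the experiment run.

```python
# Sketch of the model cache used by actions/predict/main.py: a module-level
# dict keyed by experiment name, filled only on a cache miss so the model is
# fetched once and reused across subsequent invocations.
model_ctx = {}

def get_model(exp_name, run_id, load_model):
    if exp_name not in model_ctx:
        # first request for this experiment: fetch and cache the model
        model_ctx[exp_name] = load_model(exp_name, run_id)
    return model_ctx[exp_name]
```

Because the cache is keyed by experiment name, a daemon serving several experiments keeps one loaded model per experiment rather than re-fetching on every request.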
* `skill.json` Skill definition and action mapping * `conn.json` Connection definition * `config.py` Configuration file to update connection and project configurations and secrets -* `actions/train/train.py` Model train code to be run as Cortex Job +* `actions/train/train.py` Model train code to be run as Sensa Job * `actions/train/requirements.txt` Python3 libraries dependencies * `actions/train/Dockerfile` to build Docker image for train action * `Makefile` Makefile to build and push Train and Predict Action Docker images to the specified `DOCKER_PREGISTRY_URL` -* `actions/predict/main.py` Predict code to be run as Cortex Daemon +* `actions/predict/main.py` Predict code to be run as Sensa Daemon * `actions/predict/requirements.txt` Python3 libraries dependencies * `actions/predict/Dockerfile` to build Docker image for predict action -* `deploy_skill.py` Uses cortex-python SDK to deploy skill and actions +* `deploy_skill.py` Uses sensa-python SDK to deploy skill and actions * `tests/test_train.json` Sample Payload for Train Skill * `tests/test_predict.json` Sample Payload for Predict Skill #### Steps -A Makefile is provided to do these steps. Set environment variables `DOCKER_PREGISTRY_URL` (like `/`) and `PROJECT_NAME` (Cortex Project Name) and use Makefile to build and push docker images of Train and Predict Actions. +A Makefile is provided to do these steps. Set environment variables `DOCKER_PREGISTRY_URL` (like `/`) and `PROJECT_NAME` (Sensa Project Name) and use Makefile to build and push docker images of Train and Predict Actions. `make deploy` will build & push Docker images for Train and Predict Actions and then deploy the skill(wrapping the actions). -Alternatively running the cells in [cortex-python_v6_example.ipynb](cortex-python_v6_example.ipynb) +Alternatively running the cells in [sensa-python_v6_example.ipynb](sensa-python_v6_example.ipynb) will also deploy the skills and the actions. In order to modify the actions follow the steps below: -1. 
Set environment variables `DOCKER_PREGISTRY_URL` (like `/`) and `PROJECT_NAME` (Cortex Project Name) +1. Set environment variables `DOCKER_PREGISTRY_URL` (like `/`) and `PROJECT_NAME` (Sensa Project Name) 2. Start by modifying the `conn.json` file updating the connection definition 3. Make sure the secrets such as AWS_PUBLIC_KEY, S3_BUCKET, FILE_NAME, API_ENDPOINT and CORTEX_TOKEN are updated in the `config.py` file 4. Modify the main executable (`main.py` by default) run by the action image's entrypoint/command to handle the action's custom logic. @@ -189,4 +189,4 @@ And the Predict action output will look something like this -For more details about how to build skills go to [Cortex Fabric Documentation - Development - Develop Skills](https://cognitivescale.github.io/cortex-fabric/docs/build-skills/define-skills) +For more details about how to build skills go to [Sensa Fabric Documentation - Development - Develop Skills](https://cognitivescale.github.io/cortex-fabric/docs/build-skills/define-skills) diff --git a/EndToEndExample/actions/predict/main.py b/EndToEndExample/actions/predict/main.py index fb0ad022..54c8e2bf 100644 --- a/EndToEndExample/actions/predict/main.py +++ b/EndToEndExample/actions/predict/main.py @@ -2,8 +2,8 @@ Copyright (c) 2021. Cognitive Scale Inc. All rights reserved. 
""" -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment from fastapi import FastAPI import numpy as np @@ -25,7 +25,7 @@ def run(req: dict): run_id = payload["run_id"] # if model is not loaded - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) model_ctx[exp_name] = init_model(exp_name, run_id, client) diff --git a/EndToEndExample/actions/predict/requirements.txt b/EndToEndExample/actions/predict/requirements.txt index d4cec944..6564d510 100644 --- a/EndToEndExample/actions/predict/requirements.txt +++ b/EndToEndExample/actions/predict/requirements.txt @@ -1,6 +1,6 @@ pandas==1.3.0 numpy==1.22.2 -cortex-python==6.3.1a1 +sensa-python==6.5.0 scikit-learn==0.23.2 fastapi==0.68.0 uvicorn==0.14.0 \ No newline at end of file diff --git a/EndToEndExample/actions/train/requirements.txt b/EndToEndExample/actions/train/requirements.txt index c1eed11a..783950f4 100644 --- a/EndToEndExample/actions/train/requirements.txt +++ b/EndToEndExample/actions/train/requirements.txt @@ -1,5 +1,5 @@ pandas==1.3.0 numpy==1.22.2 -cortex-python==6.3.1a1 +sensa-python==6.5.0 scikit-learn==0.23.2 boto3==1.18.19 \ No newline at end of file diff --git a/EndToEndExample/actions/train/train.py b/EndToEndExample/actions/train/train.py index a5bd4bda..ab57a16a 100644 --- a/EndToEndExample/actions/train/train.py +++ b/EndToEndExample/actions/train/train.py @@ -13,8 +13,8 @@ import pandas as pd from sklearn.model_selection import train_test_split from cat_encoder import CatEncoder -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment from sklearn.tree import DecisionTreeClassifier from sklearn.neural_network import MLPClassifier from sklearn import svm @@ -96,7 +96,7 @@ def 
save_experiment(client, experiment_name, filename, algo, model_id): # train model using the connection def train(params): - client = Cortex.from_message(params) + client = Sensa.from_message(params) payload = params['payload'] # Read connection @@ -192,5 +192,5 @@ def train(params): if len(sys.argv) < 2: print("Message/payload commandline is required") exit(1) - # The last argument in sys.argv is the payload from cortex + # The last argument in sys.argv is the payload from sensa train(json.loads(sys.argv[-1])) diff --git a/EndToEndExample/deploy_skill.py b/EndToEndExample/deploy_skill.py index 9670e709..6b69b85f 100644 --- a/EndToEndExample/deploy_skill.py +++ b/EndToEndExample/deploy_skill.py @@ -4,10 +4,10 @@ """ ## steps 1. S3 bucket contains the data(in this case csv file) -2. We save a connection of the given types -> (demonstrated using cortex-python sdk) +2. We save a connection of the given types -> (demonstrated using sensa-python sdk) -5. Dockerize and push an action to train multiple models and save in expermients (use the connection in step 2 to load the data) -> job +5. Dockerize and push an action to train multiple models and save them in experiments (use the connection in step 2 to load the data) -> job -3. Dockerize and push another action to predict using the exprinment name(to specify a particular model) -> daemon +3. Dockerize and push another action to predict using the experiment name (to specify a particular model) -> daemon -4. Save a skill definition, and route inputs to two different actions -> (demonstrated using cortex-python sdk) +4. Save a skill definition, and route inputs to two different actions -> (demonstrated using sensa-python sdk) 5. 
Test the skills using cli """ @@ -22,7 +22,7 @@ from config import PROJECT_ID, API_ENDPOINT, CORTEX_TOKEN, CONN_PARAMS -from cortex import Cortex +from sensa import Sensa params = { @@ -32,7 +32,7 @@ } if __name__ == "__main__": - client = Cortex.client() + client = Sensa.client() conn_params = {} with open("conn.json") as f: conn_params = json.load(f) diff --git a/EndToEndExample/requirements.txt b/EndToEndExample/requirements.txt index c19d63ae..fbf5443e 100644 --- a/EndToEndExample/requirements.txt +++ b/EndToEndExample/requirements.txt @@ -1 +1 @@ -cortex-python==6.3.1a1 \ No newline at end of file +sensa-python==6.5.0 \ No newline at end of file diff --git a/EndToEndExample/cortex-python_v6_example.ipynb b/EndToEndExample/sensa-python_v6_example.ipynb similarity index 97% rename from EndToEndExample/cortex-python_v6_example.ipynb rename to EndToEndExample/sensa-python_v6_example.ipynb index 74a278b6..4c9e3702 100644 --- a/EndToEndExample/cortex-python_v6_example.ipynb +++ b/EndToEndExample/sensa-python_v6_example.ipynb @@ -7,10 +7,10 @@ "source": [ "## steps\n", "1. S3 bucket contains the data(in this case csv file) \n", - "2. We save a connection of the given types -> (demonstrated using cortex-python sdk)\n", + "2. We save a connection of the given types -> (demonstrated using sensa-python sdk)\n", - "5. Dockerize and push an action to train multiple models and save in expermients (use the connection in step 2 to load the data) -> job\n", + "5. Dockerize and push an action to train multiple models and save them in experiments (use the connection in step 2 to load the data) -> job\n", - "3. Dockerize and push another action to predict using the exprinment name(to specify a particular model) -> daemon\n", + "3. Dockerize and push another action to predict using the experiment name (to specify a particular model) -> daemon\n", - "4. Save a skill definition, and route inputs to two different actions -> (demonstrated using cortex-python sdk)\n", + "4. Save a skill definition, and route inputs to two different actions -> (demonstrated using sensa-python sdk)\n", "5. 
Test the skills using cli" ] }, @@ -37,7 +37,7 @@ "outputs": [], "source": [ "# install dependencies\n", - "!pip install cortex-python==6.3.1a1" + "!pip install sensa-python==6.5.0" ] }, { @@ -52,7 +52,7 @@ "import json\n", "from pprint import pprint\n", "\n", - "from cortex import Cortex" + "from sensa import Sensa" ] }, { @@ -96,7 +96,7 @@ "metadata": {}, "outputs": [], "source": [ - "client = Cortex.client(api_endpoint=params['apiEndpoint'], project=params['projectId'], token=params['token'])" + "client = Sensa.client(api_endpoint=params['apiEndpoint'], project=params['projectId'], token=params['token'])" ] }, { @@ -278,7 +278,7 @@ " 'title': 'Daemon Port',\n", " 'type': 'string',\n", " 'validValues': []}],\n", - " 'title': 'End to End example with cortex-python v6 SDK'}\n" + " 'title': 'End to End example with sensa-python v6 SDK'}\n" ] } ], @@ -457,7 +457,7 @@ " 'type': 'string',\n", " 'validValues': []}],\n", " 'tags': [],\n", - " 'title': 'End to End example with cortex-python v6 SDK'}\n" + " 'title': 'End to End example with sensa-python v6 SDK'}\n" ] } ], @@ -501,7 +501,7 @@ ], "metadata": { "kernelspec": { - "display_name": "cortex-python", + "display_name": "sensa-python", "language": "python", "name": "python3" }, diff --git a/EndToEndExample/skill.json b/EndToEndExample/skill.json index df799cd1..91156777 100644 --- a/EndToEndExample/skill.json +++ b/EndToEndExample/skill.json @@ -1,7 +1,7 @@ { "name": "e2e-example", "camel": "1.0.0", - "title": "End to End example with cortex-python v6 SDK", + "title": "End to End example with sensa-python v6 SDK", "description": "Training multiple models and using them to predict", "inputs": [ { diff --git a/ExperimentsExample/predict/main.py b/ExperimentsExample/predict/main.py index 99a976aa..23b2f9c0 100644 --- a/ExperimentsExample/predict/main.py +++ b/ExperimentsExample/predict/main.py @@ -2,8 +2,8 @@ Copyright (c) 2021. Cognitive Scale Inc. All rights reserved. 
""" -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment from fastapi import FastAPI from cat_encoder import CatEncoder @@ -26,7 +26,7 @@ def run(req: dict): run_id = payload["run_id"] # if model is not loaded - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) model_ctx[exp_name] = init_model(exp_name, run_id, client) # retrieve model from the context diff --git a/ExperimentsExample/predict/requirements.txt b/ExperimentsExample/predict/requirements.txt index 6cb1841a..b643a7b6 100644 --- a/ExperimentsExample/predict/requirements.txt +++ b/ExperimentsExample/predict/requirements.txt @@ -1,6 +1,6 @@ pandas numpy -cortex-python==6.3.1 +sensa-python==6.5.0 scikit-learn==0.24.2 fastapi uvicorn \ No newline at end of file diff --git a/ExperimentsExample/train/requirements.txt b/ExperimentsExample/train/requirements.txt index 0a30b823..eda2104e 100644 --- a/ExperimentsExample/train/requirements.txt +++ b/ExperimentsExample/train/requirements.txt @@ -1,5 +1,5 @@ pandas -cortex-python==6.3.1 +sensa-python==6.5.0 scikit-learn==0.24.2 numpy boto3 diff --git a/ExperimentsExample/train/train.py b/ExperimentsExample/train/train.py index c511d183..4c60cff1 100644 --- a/ExperimentsExample/train/train.py +++ b/ExperimentsExample/train/train.py @@ -13,8 +13,8 @@ import pandas as pd from sklearn.model_selection import train_test_split from cat_encoder import CatEncoder -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment from sklearn.tree import DecisionTreeClassifier from sklearn.neural_network import MLPClassifier from sklearn import svm @@ -91,7 +91,7 @@ def save_experiment(client, experiment_name, filename, algo, model_id): def train(params): project = 
params['projectId'] # create a Cortex client instance from the job's parameters - # client = Cortex.client(api_endpoint=params['apiEndpoint'], project=project, token=params['token']) + # client = Sensa.client(api_endpoint=params['apiEndpoint'], project=project, token=params['token']) - client = Cortex.from_message(params) + client = Sensa.from_message(params) payload = params['payload'] @@ -181,5 +181,5 @@ def train(params): if len(sys.argv) < 2: print("Message/payload commandline is required") exit(1) - # The last argument in sys.argv is the payload from cortex + # The last argument in sys.argv is the payload from sensa train(json.loads(sys.argv[-1])) diff --git a/InterventionSkillExample/requirements.txt b/InterventionSkillExample/requirements.txt index 06592a66..9ed7b7d1 100644 --- a/InterventionSkillExample/requirements.txt +++ b/InterventionSkillExample/requirements.txt @@ -1,3 +1,3 @@ -cortex-python +sensa-python fastapi uvicorn diff --git a/JobChaining/README.md b/JobChaining/README.md index d790ae50..b9c2f286 100644 --- a/JobChaining/README.md +++ b/JobChaining/README.md @@ -8,7 +8,7 @@ The filename uses the `activationId` which is passed to all skills/actions durin - Python 3.x - Docker client - Bash shell ( Power shell TBA ) -- Cortex client +- Sensa client -- URL/Credentials for a cortex instance +- URL/Credentials for a Sensa instance ## Deploying diff --git a/JobChaining/skills/dataconsumer/job.py b/JobChaining/skills/dataconsumer/job.py index 078ac0ab..af12a830 100644 --- a/JobChaining/skills/dataconsumer/job.py +++ b/JobChaining/skills/dataconsumer/job.py @@ -3,15 +3,15 @@ Licensed under CognitiveScale Example Code [License](https://cognitivescale.github.io/cortex-fabric-examples/LICENSE.md) """ -from cortex import Cortex -from cortex.content import ManagedContentClient +from sensa import Sensa +from sensa.content import ManagedContentClient import pandas as pd import sys import json def process(params): - # create a Cortex client instance from the job's parameters - client = Cortex.client(api_endpoint=params['apiEndpoint'], 
token=params['token']) + # create a Sensa client instance from the job's parameters + client = Sensa.client(api_endpoint=params['apiEndpoint'], token=params['token']) - # get he agent payload + # get the agent payload payload = params['payload'] # You can print logs to the console these are collected by docker/k8s @@ -23,7 +23,7 @@ def process(params): raise Exception("'datafileKey' is required in the payload") content_key = payload['datafileKey'] print(f'Fetching datafile from managed content: {content_key}') - # use the `client` instance to use Cortex client libraries + # use the `client` instance to use Sensa client libraries content_client = ManagedContentClient(client); - # This is streaming the records to Cortex's managed content + # This is streaming the records to Sensa's managed content # if this was called as part of an agent @@ -36,5 +36,5 @@ if len(sys.argv)<2: print("Message/payload commandline is required") exit(1) - # The last argument in sys.argv is the payload from cortex + # The last argument in sys.argv is the payload from sensa process(json.loads(sys.argv[-1])) diff --git a/JobChaining/skills/dataconsumer/requirements.txt b/JobChaining/skills/dataconsumer/requirements.txt index 7b01a942..ff17078b 100644 --- a/JobChaining/skills/dataconsumer/requirements.txt +++ b/JobChaining/skills/dataconsumer/requirements.txt @@ -1,2 +1,2 @@ -cortex-python +sensa-python pandas \ No newline at end of file diff --git a/JobChaining/skills/datagenerator/job.py b/JobChaining/skills/datagenerator/job.py index 022066ba..96e53c30 100644 --- a/JobChaining/skills/datagenerator/job.py +++ b/JobChaining/skills/datagenerator/job.py @@ -3,8 +3,8 @@ Licensed under CognitiveScale Example Code [License](https://cognitivescale.github.io/cortex-fabric-examples/LICENSE.md) """ -from cortex import Cortex -from cortex.content import ManagedContentClient +from sensa import Sensa +from sensa.content import ManagedContentClient import json import sys import time @@ -21,13 +21,13 @@ def datagen_stream(count): print('DONE GENERATING DATA') def 
process(params): - # create a Cortex client instance from the job's parameters - client = Cortex.client(api_endpoint=params['apiEndpoint'], token=params['token']) + # create a Sensa client instance from the job's parameters + client = Sensa.client(api_endpoint=params['apiEndpoint'], token=params['token']) # get the agent payload payload = params.get('payload',{}) # You can print logs to the console these are collected by docker/k8s print(f'Got payload: {payload}') - # use the `client` instance to use Cortex client libraries + # use the `client` instance to use Sensa client libraries content_client = ManagedContentClient(client); if 'activationId' in params: file_name = f'jobchain-data-{params["activationId"]}' @@ -44,5 +44,5 @@ def process(params): if len(sys.argv)<2: print("Message/payload commandline is required") exit(1) - # The last argument in argv[] is the payload from cortex + # The last argument in argv[] is the payload from sensa process(json.loads(sys.argv[-1])) diff --git a/JobChaining/skills/datagenerator/requirements.txt b/JobChaining/skills/datagenerator/requirements.txt index 6dffeb2c..4de6b02e 100644 --- a/JobChaining/skills/datagenerator/requirements.txt +++ b/JobChaining/skills/datagenerator/requirements.txt @@ -1,2 +1,2 @@ -cortex-python +sensa-python Faker \ No newline at end of file diff --git a/ai-missions/high-risk-flu-shot-mission/skills/send_engage_sydneycare_chatbot_notification/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/send_engage_sydneycare_chatbot_notification/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/send_engage_sydneycare_chatbot_notification/requirements.txt +++ b/ai-missions/high-risk-flu-shot-mission/skills/send_engage_sydneycare_chatbot_notification/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git 
a/ai-missions/high-risk-flu-shot-mission/skills/send_nurseline_notification/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/send_nurseline_notification/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/send_nurseline_notification/requirements.txt +++ b/ai-missions/high-risk-flu-shot-mission/skills/send_nurseline_notification/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git a/ai-missions/high-risk-flu-shot-mission/skills/send_schedule_appointment_notification/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/send_schedule_appointment_notification/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/send_schedule_appointment_notification/requirements.txt +++ b/ai-missions/high-risk-flu-shot-mission/skills/send_schedule_appointment_notification/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git a/ai-missions/high-risk-flu-shot-mission/skills/send_sydneycare_notification/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/send_sydneycare_notification/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/send_sydneycare_notification/requirements.txt +++ b/ai-missions/high-risk-flu-shot-mission/skills/send_sydneycare_notification/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git a/ai-missions/high-risk-flu-shot-mission/skills/send_virtual_care_notification/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/send_virtual_care_notification/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/send_virtual_care_notification/requirements.txt 
+++ b/ai-missions/high-risk-flu-shot-mission/skills/send_virtual_care_notification/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git a/ai-missions/high-risk-flu-shot-mission/skills/update_member_phone_number/requirements.txt b/ai-missions/high-risk-flu-shot-mission/skills/update_member_phone_number/requirements.txt index f8bc5023..2ca65bc8 100644 --- a/ai-missions/high-risk-flu-shot-mission/skills/update_member_phone_number/requirements.txt +++ b/ai-missions/high-risk-flu-shot-mission/skills/update_member_phone_number/requirements.txt @@ -1,4 +1,4 @@ pandas==1.4.1 -cortex-python +sensa-python uvicorn==0.12.1 fastapi==0.65.2 \ No newline at end of file diff --git a/batch-prediction/predict/main.py b/batch-prediction/predict/main.py index 7557f899..8214529e 100644 --- a/batch-prediction/predict/main.py +++ b/batch-prediction/predict/main.py @@ -13,8 +13,8 @@ import pymongo # cortex -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment def load_model(client, experiment_name, run_id, artifact_key): @@ -115,8 +115,8 @@ def make_batch_predictions(input_params): batch_size = int(input_params["properties"]["batch-size"]) try: - # Initialize Cortex Client - client = Cortex.from_message(params) + # Initialize Sensa Client + client = Sensa.from_message(params) # Read cortex connection details connection = client.connections.get_connection(input_params["properties"]["connection-name"]) diff --git a/batch-prediction/requirements.txt b/batch-prediction/requirements.txt index f701d312..fd6f8f60 100644 --- a/batch-prediction/requirements.txt +++ b/batch-prediction/requirements.txt @@ -1,5 +1,5 @@ pandas==1.4.1 -cortex-python==6.3.1a1 +sensa-python==6.5.0 catboost==0.25.1 scikit-learn==0.24.0 pymongo==3.10.1 diff --git 
a/cortex-python-lib-examples/connection-example/skills/connection-daemon/actions/connection-daemon/requirements.txt b/cortex-python-lib-examples/connection-example/skills/connection-daemon/actions/connection-daemon/requirements.txt index fef3f273..2dbc4954 100644 --- a/cortex-python-lib-examples/connection-example/skills/connection-daemon/actions/connection-daemon/requirements.txt +++ b/cortex-python-lib-examples/connection-example/skills/connection-daemon/actions/connection-daemon/requirements.txt @@ -1,4 +1,4 @@ -cortex-python +sensa-python pymongo[srv] logzero fastapi diff --git a/cortex-python-lib-examples/gcs-connection-skill/skills/gcs-connection-reader/requirements.txt b/cortex-python-lib-examples/gcs-connection-skill/skills/gcs-connection-reader/requirements.txt index 7b3d32ea..e7956331 100644 --- a/cortex-python-lib-examples/gcs-connection-skill/skills/gcs-connection-reader/requirements.txt +++ b/cortex-python-lib-examples/gcs-connection-skill/skills/gcs-connection-reader/requirements.txt @@ -1,2 +1,2 @@ -cortex-python +sensa-python google-cloud-storage diff --git a/cortex-python-lib-examples/managed-content-example/managed_content_notebook.ipynb b/cortex-python-lib-examples/managed-content-example/managed_content_notebook.ipynb index 1aa3f83c..3467a282 100644 --- a/cortex-python-lib-examples/managed-content-example/managed_content_notebook.ipynb +++ b/cortex-python-lib-examples/managed-content-example/managed_content_notebook.ipynb @@ -7,7 +7,7 @@ "outputs": [], "source": [ "## Install cortex python library\n", - "!pip install cortex-python==6.3.1a1" + "!pip install sensa-python==6.5.0" ] }, { diff --git a/cortex-python-lib-examples/managed-content-example/skills/m-content-job/actions/m-content-job/requirements.txt b/cortex-python-lib-examples/managed-content-example/skills/m-content-job/actions/m-content-job/requirements.txt index c1819925..fb9c4079 100644 --- 
a/cortex-python-lib-examples/managed-content-example/skills/m-content-job/actions/m-content-job/requirements.txt +++ b/cortex-python-lib-examples/managed-content-example/skills/m-content-job/actions/m-content-job/requirements.txt @@ -1 +1 @@ -cortex-python +sensa-python diff --git a/cortex-python-lib-examples/simple-experiment/skills/simple-exp-predict/actions/simple-exp-predict/requirements.txt b/cortex-python-lib-examples/simple-experiment/skills/simple-exp-predict/actions/simple-exp-predict/requirements.txt index 8d08124f..38504885 100644 --- a/cortex-python-lib-examples/simple-experiment/skills/simple-exp-predict/actions/simple-exp-predict/requirements.txt +++ b/cortex-python-lib-examples/simple-experiment/skills/simple-exp-predict/actions/simple-exp-predict/requirements.txt @@ -1,4 +1,4 @@ -cortex-python +sensa-python fastapi uvicorn scikit-learn diff --git a/cortex-python-lib-examples/simple-experiment/skills/simple-exp-train/actions/simple-exp-train/requirements.txt b/cortex-python-lib-examples/simple-experiment/skills/simple-exp-train/actions/simple-exp-train/requirements.txt index f9cc040e..b9ae054a 100644 --- a/cortex-python-lib-examples/simple-experiment/skills/simple-exp-train/actions/simple-exp-train/requirements.txt +++ b/cortex-python-lib-examples/simple-experiment/skills/simple-exp-train/actions/simple-exp-train/requirements.txt @@ -1,2 +1,2 @@ -cortex-python +sensa-python scikit-learn diff --git a/data-generator/__main__.py b/data-generator/__main__.py index 97595dcd..2194832d 100644 --- a/data-generator/__main__.py +++ b/data-generator/__main__.py @@ -1,10 +1,7 @@ import json -import sys -#from cortex import Cortex,Message from ds_discovery import Controller import os import warnings -import requests warnings.simplefilter(action='ignore', category=FutureWarning) warnings.simplefilter(action='ignore', category=DeprecationWarning) diff --git a/data-generator/requirements.txt b/data-generator/requirements.txt index a705d2d2..4f109c29 100644 --- 
a/data-generator/requirements.txt @@ -4,8 +4,8 @@ discovery-transition-ds == 3.02.074 kubernetes pyarrow == 4.0.1 fastparquet -# Cortex -cortex-python +# Sensa +sensa-python # Data Science packages pandas >= 1.1 numpy diff --git a/online-prediction/predict/model_flow.py b/online-prediction/predict/model_flow.py index 5163c6e8..0f84064e 100644 --- a/online-prediction/predict/model_flow.py +++ b/online-prediction/predict/model_flow.py @@ -8,8 +8,8 @@ import numpy as np import pandas as pd -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment from predict.cat_encoder import CatEncoder from predict.request_models import InvokeRequest @@ -23,7 +23,7 @@ def load_model(api_endpoint: str, token: str, project_id: str, experiment_name: raise ValueError("experiment-name is required if a model is not initialized") - # Initialize Cortex Client + # Initialize Sensa Client - client = Cortex.client(api_endpoint=api_endpoint, token=token, project=project_id) + client = Sensa.client(api_endpoint=api_endpoint, token=token, project=project_id) # Load Model from the experiment run logging.info("Loading model artifacts from experiment run...") diff --git a/online-prediction/requirements.txt b/online-prediction/requirements.txt index 15b06779..2e8b8424 100644 --- a/online-prediction/requirements.txt +++ b/online-prediction/requirements.txt @@ -1,5 +1,5 @@ pandas==1.4.1 -cortex-python==6.3.1 +sensa-python==6.5.0 fastapi==0.66.0 uvicorn==0.14.0 catboost==0.25.1 diff --git a/profile-data/requirements.txt b/profile-data/requirements.txt index f9b2676b..6cfda93a 100644 --- a/profile-data/requirements.txt +++ b/profile-data/requirements.txt @@ -1,4 +1,4 @@ -cortex-python==6.1.0 +sensa-python==6.5.0 discovery-transition-ds cortex-python-profiles==1.3.0a1 fastparquet==0.7.0 diff --git a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-1/requirements.txt 
b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-1/requirements.txt index 9080cf84..bebfdf33 100644 --- a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-1/requirements.txt +++ b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-1/requirements.txt @@ -1,3 +1,3 @@ -cortex-python==6.3.1a1 +sensa-python==6.5.0 fastapi==0.66.0 uvicorn==0.14.0 diff --git a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-2/requirements.txt b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-2/requirements.txt index d419c94c..640ac48e 100644 --- a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-2/requirements.txt +++ b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-2/requirements.txt @@ -1,3 +1,3 @@ -cortex-python==6.3.1a1 +sensa-python==6.5.0 fastapi==0.66.0 uvicorn==0.14.0 \ No newline at end of file diff --git a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-3/requirements.txt b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-3/requirements.txt index d419c94c..640ac48e 100644 --- a/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-3/requirements.txt +++ b/propagate-headers-agent-to-skill/skills/chained-daemon-skills/skill-3/requirements.txt @@ -1,3 +1,3 @@ -cortex-python==6.3.1a1 +sensa-python==6.5.0 fastapi==0.66.0 uvicorn==0.14.0 \ No newline at end of file diff --git a/propagate-headers-agent-to-skill/skills/header-daemon/requirements.txt b/propagate-headers-agent-to-skill/skills/header-daemon/requirements.txt index 06592a66..9ed7b7d1 100644 --- a/propagate-headers-agent-to-skill/skills/header-daemon/requirements.txt +++ b/propagate-headers-agent-to-skill/skills/header-daemon/requirements.txt @@ -1,3 +1,3 @@ -cortex-python +sensa-python fastapi uvicorn diff --git a/propagate-headers-agent-to-skill/skills/header-job/requirements.txt 
b/propagate-headers-agent-to-skill/skills/header-job/requirements.txt index c1819925..fb9c4079 100644 --- a/propagate-headers-agent-to-skill/skills/header-job/requirements.txt +++ b/propagate-headers-agent-to-skill/skills/header-job/requirements.txt @@ -1 +1 @@ -cortex-python +sensa-python diff --git a/sessions-example/main.py b/sessions-example/main.py index 44c4e15a..502f7b04 100644 --- a/sessions-example/main.py +++ b/sessions-example/main.py @@ -1,6 +1,6 @@ from fastapi import FastAPI -from cortex import Cortex +from sensa import Sensa app = FastAPI() @@ -8,7 +8,7 @@ @app.post('/start') def start(req: dict): payload = req['payload'] - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) ttl = None description = "No description given" if "ttl" in payload: @@ -22,7 +22,7 @@ def start(req: dict): @app.post('/get') def get(req: dict): payload = req['payload'] - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) session_id = None if "session_id" in payload: @@ -41,7 +41,7 @@ def get(req: dict): @app.post('/put') def put(req: dict): payload = req['payload'] - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) session_id = None if "session_id" in payload: @@ -60,7 +60,7 @@ def put(req: dict): @app.post('/delete') def delete(req: dict): payload = req['payload'] - client = Cortex.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) + client = Sensa.client(api_endpoint=req["apiEndpoint"], project=req["projectId"], token=req["token"]) session_id = None if "session_id" in payload: diff 
--git a/sessions-example/requirements.txt b/sessions-example/requirements.txt index 05a61246..b048f1fb 100644 --- a/sessions-example/requirements.txt +++ b/sessions-example/requirements.txt @@ -1,3 +1,3 @@ -cortex-python==6.3.1a1 +sensa-python==6.5.0 fastapi uvicorn diff --git a/spark-batch-prediction/src/main/python/main.py b/spark-batch-prediction/src/main/python/main.py index a234cba3..ec61d6d7 100644 --- a/spark-batch-prediction/src/main/python/main.py +++ b/spark-batch-prediction/src/main/python/main.py @@ -4,9 +4,9 @@ Licensed under CognitiveScale Example Code [License](https://cognitivescale.github.io/cortex-fabric-examples/LICENSE.md) """ import sys -from cortex import Cortex -from cortex.utils import log_message, get_logger -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.utils import log_message, get_logger +from sensa.experiment import Experiment import json import numpy as np import logging @@ -93,7 +93,7 @@ def make_batch_predictions(input_params): outcome = input_params["properties"]["outcome"] # Initialize Cortex Client - client = Cortex.client(api_endpoint=url, token=token, project=project) + client = Sensa.client(api_endpoint=url, token=token, project=project) # Read cortex connection details connection = client.connections.get_connection(input_params["properties"]["connection-name"]) diff --git a/spark-batch-prediction/src/main/python/requirements.txt b/spark-batch-prediction/src/main/python/requirements.txt index 4accf203..1a97fb2e 100644 --- a/spark-batch-prediction/src/main/python/requirements.txt +++ b/spark-batch-prediction/src/main/python/requirements.txt @@ -1,4 +1,4 @@ pandas==1.2.4 -cortex-python==6.3.1a1 +sensa-python==6.5.0 catboost==0.25.1 scikit-learn==0.24.0 diff --git a/spark-batch-prediction/submit_job.py b/spark-batch-prediction/submit_job.py index f9c31e05..8a357565 100644 --- a/spark-batch-prediction/submit_job.py +++ b/spark-batch-prediction/submit_job.py @@ -8,9 +8,9 @@ import json import 
subprocess import logging -from cortex import Cortex -from cortex.utils import log_message, get_logger -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.utils import log_message, get_logger +from sensa.experiment import Experiment def get_runtime_args(config): @@ -38,7 +38,7 @@ def get_runtime_args(config): skill_name = input_params["skillName"] experiment_name = input_params["properties"]["experiment-name"] run_id = input_params["properties"]["run-id"] - client = Cortex.client(api_endpoint=url, token=token, project=project) + client = Sensa.client(api_endpoint=url, token=token, project=project) result = client.experiments.get_experiment(experiment_name) experiment = Experiment(result, client.experiments) run = experiment.get_run(run_id) diff --git a/spark-emr-batch-prediction/emr-container-image/src/job.py b/spark-emr-batch-prediction/emr-container-image/src/job.py index 54cc66b5..398bcd75 100644 --- a/spark-emr-batch-prediction/emr-container-image/src/job.py +++ b/spark-emr-batch-prediction/emr-container-image/src/job.py @@ -4,9 +4,9 @@ Licensed under CognitiveScale Example Code [License](https://cognitivescale.github.io/cortex-fabric-examples/LICENSE.md) """ import sys -from cortex import Cortex -from cortex.utils import log_message, get_logger -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.utils import log_message, get_logger +from sensa.experiment import Experiment import json import numpy as np import logging @@ -92,7 +92,7 @@ def make_batch_predictions(input_params): outcome = input_params["outcome"] # Initialize Cortex Client - client = Cortex.client(api_endpoint=url, token=token, project=project) + client = Sensa.client(api_endpoint=url, token=token, project=project) # Read cortex connection details print("connection retreival started") diff --git a/spark-emr-batch-prediction/emr-container-image/src/requirements.txt b/spark-emr-batch-prediction/emr-container-image/src/requirements.txt index 
bc708846..88a24126 100644 --- a/spark-emr-batch-prediction/emr-container-image/src/requirements.txt +++ b/spark-emr-batch-prediction/emr-container-image/src/requirements.txt @@ -1,6 +1,6 @@ numpy pandas==1.0.5 -cortex-python==6.3.1a1 +sensa-python==6.5.0 catboost==0.25.1 scikit-learn==0.24.0 diff --git a/spark-emr-batch-prediction/requirements.txt b/spark-emr-batch-prediction/requirements.txt index ace6ef42..e310a69f 100644 --- a/spark-emr-batch-prediction/requirements.txt +++ b/spark-emr-batch-prediction/requirements.txt @@ -1,3 +1,3 @@ boto3==1.21.4 botocore==1.24.4 -cortex-python==6.3.1a1 \ No newline at end of file +sensa-python==6.5.0 \ No newline at end of file diff --git a/spark-emr-batch-prediction/submit-job.py b/spark-emr-batch-prediction/submit-job.py index e3512742..ad1a20bd 100644 --- a/spark-emr-batch-prediction/submit-job.py +++ b/spark-emr-batch-prediction/submit-job.py @@ -10,8 +10,8 @@ import sys import os -from cortex import Cortex -from cortex.experiment import Experiment +from sensa import Sensa +from sensa.experiment import Experiment def get_runtime_args(config): @@ -117,7 +117,7 @@ def add_step(cluster_id, emr_client, args, script_path, input_params): experiment_name = input_params["properties"]["experiment-name"] run_id = input_params["properties"]["run-id"] - client = Cortex.client(api_endpoint=url, token=token, project=project) + client = Sensa.client(api_endpoint=url, token=token, project=project) result = client.experiments.get_experiment(experiment_name) experiment = Experiment(result, client.experiments) run = experiment.get_run(run_id) diff --git a/synchronous-agent-invocation/client-callback-example/src/main.py b/synchronous-agent-invocation/client-callback-example/src/main.py index ac285937..15d17f86 100644 --- a/synchronous-agent-invocation/client-callback-example/src/main.py +++ b/synchronous-agent-invocation/client-callback-example/src/main.py @@ -8,8 +8,8 @@ import json import requests -from cortex.utils import generate_token 
-from cortex.env import CortexEnv +from sensa.utils import generate_token +from sensa.env import SensaEnv def read_config() -> dict: @@ -26,7 +26,7 @@ def get_env_details(): Environment Details from Local Cortex Configure :return: host, token, project """ - env = CortexEnv() + env = SensaEnv() token = generate_token(env.config) host = env.api_endpoint project = env.project diff --git a/synchronous-agent-invocation/requirements.txt b/synchronous-agent-invocation/requirements.txt index ec5c9ec8..b1fb3db1 100644 --- a/synchronous-agent-invocation/requirements.txt +++ b/synchronous-agent-invocation/requirements.txt @@ -1,5 +1,5 @@ requests==2.31.0 -cortex-python==6.3.1 +sensa-python==6.3.1 websocket-client==1.2.1 fastapi==0.68.1 uvicorn==0.15.0 \ No newline at end of file diff --git a/synchronous-agent-invocation/sync-true-example/src/main.py b/synchronous-agent-invocation/sync-true-example/src/main.py index 1334c855..f95ea05c 100644 --- a/synchronous-agent-invocation/sync-true-example/src/main.py +++ b/synchronous-agent-invocation/sync-true-example/src/main.py @@ -8,8 +8,8 @@ import requests import os.path -from cortex.utils import generate_token -from cortex.env import CortexEnv +from sensa.utils import generate_token +from sensa.env import SensaEnv def read_config() -> dict: @@ -26,7 +26,7 @@ def get_env_details(): Environment Details from Local Cortex Configure :return: host, token, project """ - env = CortexEnv() + env = SensaEnv() token = generate_token(env.config) host = env.api_endpoint project = env.project diff --git a/synchronous-agent-invocation/web-socket-client/main.py b/synchronous-agent-invocation/web-socket-client/main.py index 5828f518..8a404c37 100644 --- a/synchronous-agent-invocation/web-socket-client/main.py +++ b/synchronous-agent-invocation/web-socket-client/main.py @@ -9,8 +9,8 @@ import json import urllib.parse from threading import Thread -from cortex.utils import generate_token -from cortex.env import CortexEnv +from sensa.utils import 
generate_token +from sensa.env import SensaEnv def on_message(ws, message): @@ -77,7 +77,7 @@ def get_env_details(): Environment Details from Local Cortex Configure :return: host, token, project """ - env = CortexEnv() + env = SensaEnv() token = generate_token(env.config) host = env.api_endpoint project = env.project diff --git a/synchronous-agent-invocation/web-socket-client/requirements.txt b/synchronous-agent-invocation/web-socket-client/requirements.txt index 237c4927..21fc4fb6 100644 --- a/synchronous-agent-invocation/web-socket-client/requirements.txt +++ b/synchronous-agent-invocation/web-socket-client/requirements.txt @@ -1,2 +1,2 @@ websocket-client==1.2.1 -cortex-python==6.0.3 \ No newline at end of file +sensa-python==6.0.3 \ No newline at end of file