
Use Cases

Drill down to see what use cases Purgo AI's Agent can handle.

Search by Business Function

Accelerate Drug Discovery
  • Genetic Target Identification

  • QSAR Modeling (Quantitative Structure-Activity Relationship)

  • Geneformer Modeling (Gene Expressions & Network Biology)

  • Image Classification (e.g., Digital Pathology)

  • Chromatography Insights

Streamline Clinical Development
  • Clinical Trial Protocol Design

  • Clinical Trial Site Selection

  • Drug Repurposing

  • Automate QA of Clinical Data

  • Modernize Clinical Data Repository

Build a FAIR Data Platform
  • Knowledge Graphs for R&D

  • Medical Image Processing & Management (Pixels for DICOM)

  • Omics Data Management

  • Research Assistant

  • BioMedical Information Retrieval

View by Business Solution

Automate QA of Clinical Data

Implement Dynamic Data Quality Scorecard for Weekly Metric
Introduction: Build a *Dynamic Data Quality Scorecard* that automatically evaluates weekly NBRx data against predefined validation rules and produces a summary scorecard for pipeline monitoring. The scorecard must support multiple validations, such as schema completeness, null checks, duplicate week detection, and date boundary checks.

Requirements: Execute the SQL query linked to {{metric_name = total_NBRx}} from the {{indication_config}} table and retrieve the dataset. Compare the SQL result set with {{indication_level_metric}} and run the quality validations below: schema completeness, null validation, duplicate week validation, and date continuity check (7-day window). Assign *Pass / Fail* to each validation rule, along with an aggregated score (0-100). Display the scorecard output with the following columns: {{Validation_Name}}, {{Validation_Status}}, {{Validation_Comments}}. {{Validation_Name}} should contain the values 'Schema Completeness', 'Null Validation', 'Duplicate Week Validation', and 'Date Continuity Check'. {{Validation_Status}} should contain PASS or FAIL. Finally, provide the overall score for the validations.
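The validation flow above can be sketched in plain Python. The row layout (week_start, indication, total_nbrx) and the sample data are assumptions for illustration only; in the actual pipeline the rows would come from the {{indication_config}} SQL query.

```python
from datetime import date, timedelta

# Assumed column layout for the weekly metric dataset (illustrative only).
EXPECTED_SCHEMA = {"week_start", "indication", "total_nbrx"}

rows = [
    {"week_start": date(2024, 1, 1),  "indication": "A", "total_nbrx": 120},
    {"week_start": date(2024, 1, 8),  "indication": "A", "total_nbrx": 135},
    {"week_start": date(2024, 1, 15), "indication": "A", "total_nbrx": 128},
]

def build_scorecard(rows):
    checks = []

    # Schema completeness: every expected column is present.
    missing = EXPECTED_SCHEMA - set(rows[0]) if rows else EXPECTED_SCHEMA
    checks.append(("Schema Completeness", not missing,
                   f"missing: {sorted(missing)}" if missing else "all columns present"))

    # Null validation: no None values anywhere in the dataset.
    nulls = sum(v is None for r in rows for v in r.values())
    checks.append(("Null Validation", nulls == 0, f"{nulls} null value(s)"))

    # Duplicate week validation: each week_start appears exactly once.
    weeks = sorted(r["week_start"] for r in rows)
    dups = len(weeks) - len(set(weeks))
    checks.append(("Duplicate Week Validation", dups == 0, f"{dups} duplicate week(s)"))

    # Date continuity: consecutive weeks are exactly 7 days apart.
    continuous = all(b - a == timedelta(days=7) for a, b in zip(weeks, weeks[1:]))
    checks.append(("Date Continuity Check", continuous,
                   "7-day window respected" if continuous else "gap detected"))

    scorecard = [{"Validation_Name": name,
                  "Validation_Status": "PASS" if ok else "FAIL",
                  "Validation_Comments": comment}
                 for name, ok, comment in checks]
    overall = round(100 * sum(ok for _, ok, _ in checks) / len(checks))
    return scorecard, overall

scorecard, overall = build_scorecard(rows)  # overall is 100 for the clean sample
```

Each validation contributes equally to the 0-100 aggregated score here; the real scorecard could weight rules differently.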

Automate Reporting (e.g., OTIF)

AD-344: Aggregate REP_SHARE_OF_VOICE Metrics by Reporting Date for Repatha
*Requirements*: Develop SQL logic that reads the table 'stitched_pmr_consolidation_data' and extracts Share of Voice (SoV) metrics specifically for the product *Repatha*. The logic must filter the dataset to include only records where *metric = 'REP_SHARE_OF_VOICE'*, *frequency = 'QUARTERLY'*, *speciality = 'ALL'*, and *product_name = 'REPATHA'*. Additionally, the results must be restricted to reporting dates on or after *September 1, 2023*. After applying these filters, the SQL logic should group the dataset by *product_name* and *time_stamp*, and compute *SUM(value)* for each group to derive the total Share of Voice for Repatha per reporting period. The final transformed output should expose three fields (*product_name*, *time_stamp*, and *total_share_of_voice*) and must be published through a finalized view named *stitched_pmr_consolidation_data_vw*.

*Final Output*: Show the results.

Unity Catalog: stitched_pmr_consolidation_data_vw
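A minimal sketch of the AD-344 filter-and-aggregate logic, written in Python rather than SQL and using invented sample values; only the columns the ticket names are modeled.

```python
from collections import defaultdict
from datetime import date

# Invented stand-in rows for stitched_pmr_consolidation_data.
rows = [
    {"metric": "REP_SHARE_OF_VOICE", "frequency": "QUARTERLY", "speciality": "ALL",
     "product_name": "REPATHA", "time_stamp": date(2023, 9, 30), "value": 12.5},
    {"metric": "REP_SHARE_OF_VOICE", "frequency": "QUARTERLY", "speciality": "ALL",
     "product_name": "REPATHA", "time_stamp": date(2023, 9, 30), "value": 7.5},
    # Excluded below: reporting date falls before the 2023-09-01 cutoff.
    {"metric": "REP_SHARE_OF_VOICE", "frequency": "QUARTERLY", "speciality": "ALL",
     "product_name": "REPATHA", "time_stamp": date(2023, 6, 30), "value": 9.0},
]

def total_share_of_voice(rows):
    # WHERE clause: metric, frequency, speciality, product, and date cutoff.
    keep = [r for r in rows
            if r["metric"] == "REP_SHARE_OF_VOICE"
            and r["frequency"] == "QUARTERLY"
            and r["speciality"] == "ALL"
            and r["product_name"] == "REPATHA"
            and r["time_stamp"] >= date(2023, 9, 1)]
    # GROUP BY product_name, time_stamp with SUM(value).
    totals = defaultdict(float)
    for r in keep:
        totals[(r["product_name"], r["time_stamp"])] += r["value"]
    return [{"product_name": p, "time_stamp": t, "total_share_of_voice": v}
            for (p, t), v in sorted(totals.items())]
```

The equivalent SQL would apply the same WHERE filters, GROUP BY *product_name* and *time_stamp*, and publish the result as the view stitched_pmr_consolidation_data_vw.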
AD-342: Customer_360 - Fix CPD Value Inflation Issue Across Brands for BLUE BALL Sales Team

*Introduction*: Users observed that *CPD (Calls Per Day)* values appear doubled or inflated across multiple brands and quarters in the dashboards. When the backend table s_field_reporting_activity_sales_team_brand_interactions_intmd_union_all was checked, the CPD values were already inflated at the source. This use case focuses on validating the source calculations and identifying the correct CPD computation using the formula CPD = rep_calls_total_calls / rep_calls_total_days_in_territory. The team needs a query to reproduce the issue and verify whether the inflated values match the dashboard output and the source table.

*Requirements*: Develop SQL logic that reads the table c360_s_field_report_activity_sales_team_brand_interactions_intmd_union_all. Filter the dataset for the following values: business_unit_code = *BCBU*, sales_team_grouping = *BLUE BALL*, classification_type = *BLUE BALL ALL PORTFOLIO TARGET*, interaction_channel = *ALL*, time_bucket_id = *CQTDW*, and classification_value IN ('EB', 'ED', 'ALL_TARGETS_PROFS'). Use the CPD formula Total_CPD = SUM(rep_calls_total_calls / rep_calls_total_days_in_territory). Round the CPD value to *one decimal*. Group the data by all non-aggregated columns. Return the dataset so the inflated CPD values can be compared with the dashboard values.

Unity Catalog: c360_s_field_report_activity_sales_team_brand_interactions_intmd_union_all
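A minimal Python sketch of the AD-342 reproduction logic, using an invented sample row; duplicating that row simulates the source-level inflation the team observed, so the "doubled" CPD can be reproduced and compared against a clean single-row baseline.

```python
# One invented source row matching all AD-342 filter values.
base = {
    "business_unit_code": "BCBU",
    "sales_team_grouping": "BLUE BALL",
    "classification_type": "BLUE BALL ALL PORTFOLIO TARGET",
    "interaction_channel": "ALL",
    "time_bucket_id": "CQTDW",
    "classification_value": "EB",
    "rep_calls_total_calls": 45,
    "rep_calls_total_days_in_territory": 9,
}
rows = [base, dict(base)]  # duplicated source row -> inflated CPD

def total_cpd(rows):
    # Apply the ticket's filter values.
    keep = [r for r in rows
            if r["business_unit_code"] == "BCBU"
            and r["sales_team_grouping"] == "BLUE BALL"
            and r["classification_type"] == "BLUE BALL ALL PORTFOLIO TARGET"
            and r["interaction_channel"] == "ALL"
            and r["time_bucket_id"] == "CQTDW"
            and r["classification_value"] in ("EB", "ED", "ALL_TARGETS_PROFS")]
    # Total_CPD = SUM(calls / days in territory), rounded to one decimal.
    cpd = sum(r["rep_calls_total_calls"] / r["rep_calls_total_days_in_territory"]
              for r in keep)
    return round(cpd, 1)

# The single clean row yields 5.0; the duplicated dataset yields 10.0,
# reproducing the doubled values seen in the dashboard.
```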
