Automated Grant Feedback Pilot

Looking for complementary feedback on your next funding application?

Review each step of the data pipeline and the service agreements for the third-party vendors and platforms involved in the workflow.

The what, why, and how—an end-to-end overview of automated grant feedback and what to expect when you use the tool.

Chapters

  1. Why prioritize identifying weaknesses?
  2. How do I use Automated Grant Feedback?
  3. How accurate is Automated Grant Feedback?
  4. What does the output look like?
  5. What can I do with the output?
  6. How is using this tool different from using ChatGPT?
  7. How was Simulated Critical Peer Review trained?
  8. What are the underlying assumptions behind the design?

Western is pleased to pilot automated grant feedback through an AI tool that supports grant writing by "predicting" reviewers' critiques.

Quick Facts

  • The purpose of the automated grant feedback pilot is to provide an additional opportunity to preemptively identify weaknesses that could lower an application’s ranking.
  • The pilot is not intended to replace or circumvent any current departmental, faculty or institutional supports or processes.
  • The tool models reviewer critiques from past competitions as benchmarks for analyzing draft applications.
  • Automated grant feedback is an "on-demand" tool—an optional, self-serve resource to bolster application competitiveness.
  • The tool is powered by leading enterprise AI models from Anthropic, Google and OpenAI.

Collaboration

The automated grant feedback prototype is a unique and proprietary "made-at-Western" resource, built in collaboration with faculty members and research officers across campus.

Automated grant feedback is under continual development. To contribute to the project, contact James Shelley for more information about how you can help improve the tool.

Data Protection, Confidentiality & Privacy

We understand and appreciate that funding application data is extremely valuable and sensitive.

Our aim is to pioneer and experiment with frontier technology with an unwavering commitment to responsibility, security, transparency, and institutional policy compliance.

Towards this end, all data privacy, confidentiality, protection, and retention policies are described below.

Microsoft OneDrive

Our AI-powered review process begins by adding your file to a Microsoft OneDrive for Business folder in Western University’s Microsoft 365 tenant. The file is automatically deleted when the review is complete; per Microsoft’s file retention policy, deleted files are completely purged from the platform within 93 days.
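
Intake is handled automatically by the pilot, but for readers curious about the mechanics, the OneDrive step can be sketched as a Microsoft Graph "simple upload" request. The folder name, filename, and token environment variable below are illustrative placeholders, not the pilot's actual configuration.

```python
import os
from urllib.parse import quote

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def build_upload_request(folder: str, filename: str) -> tuple:
    """Build a Microsoft Graph 'simple upload' PUT request for a small file.

    Returns the URL and headers only; the caller would send the file bytes
    over HTTPS. The token environment variable is a placeholder.
    """
    # Graph addresses drive items by path: /me/drive/root:/{path}:/content
    path = quote(f"{folder}/{filename}")
    url = f"{GRAPH_ROOT}/me/drive/root:/{path}:/content"
    headers = {
        "Authorization": f"Bearer {os.environ.get('GRAPH_TOKEN', '<token>')}",
        "Content-Type": "application/octet-stream",
    }
    return url, headers

url, headers = build_upload_request("AGF-Intake", "draft-application.docx")
# The draft's bytes would then be sent with an HTTPS PUT to `url`.
```

Because the upload lands inside Western's own Microsoft 365 tenant, the file never leaves institutionally governed storage at this stage.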

Microsoft Power Automate

The content of the file is processed using Microsoft Power Automate, part of Microsoft’s Power Platform. Just like your Western Outlook account, Power Automate runs within Western’s corporate Microsoft tenant and complies with the same data protection agreements that govern all use of Microsoft 365 at Western. Internal Power Automate history logs (used for development and troubleshooting) are deleted after 28 days and removed from administrative audit logs after 90 days.

Cloudmersive

To efficiently process data, we use an encrypted connection to a third-party vendor called Cloudmersive, which handles various aspects of data format conversion. Pursuant to its Terms of Service and Data Processing Agreement, Cloudmersive’s APIs (Application Programming Interfaces) are stateless, which means they do not store or retain payload data or copies after a conversion is complete.
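
A stateless conversion call of this kind can be sketched as a single one-shot request. This is a minimal illustration assuming a Cloudmersive-style REST endpoint; the path, header name, and form field name are assumptions for illustration, not a verbatim copy of the vendor's documentation.

```python
import os

# Illustrative endpoint: modeled on the general shape of a document-conversion
# REST API; the exact path, header, and field names are assumptions, not
# copied from Cloudmersive's documentation.
CONVERT_URL = "https://api.cloudmersive.com/convert/docx/to/txt"

def build_convert_request(docx_bytes: bytes):
    """Assemble a one-shot, stateless conversion request.

    'Stateless' means the vendor retains nothing about the payload
    server-side once the converted text is returned.
    """
    headers = {"Apikey": os.environ.get("CLOUDMERSIVE_KEY", "<api-key>")}
    files = {"inputFile": ("draft.docx", docx_bytes)}  # multipart form field
    return CONVERT_URL, headers, files
```

The key property for privacy purposes is that the entire exchange is request/response: the document goes out encrypted, the converted text comes back, and no copy persists with the vendor.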

Large Language Models

The simulated peer review process uses three external Large Language Models (LLMs) provided by third-party vendors. All services are accessed using encrypted API calls from within Microsoft Power Automate. The data protection and retention policies of each vendor are described below:

OpenAI

The first LLM vendor is OpenAI. Our use of OpenAI’s services is governed by the OpenAI Business Terms and its Enterprise privacy commitments. Under these terms, OpenAI agrees not to use customer content to train or improve its models, and to remove all log data after 30 days. OpenAI’s API Platform has attained SOC 2 Type 2 compliance.

Anthropic

The second LLM vendor is Anthropic. Our use of Anthropic’s service is governed by Anthropic’s Commercial Terms of Service. Under these terms, Anthropic agrees not to train or improve its models with customer content, and to automatically delete log data after 28 days. Anthropic has attained SOC 2 Type 2 compliance.

Google

The third LLM vendor is Google. Our use of Google’s Cloud, API, and AI products is governed by Google’s Gemini API Additional Terms of Service and the Google Data Processing Addendum for Products Where Google is a Data Processor. Under these terms, Google agrees not to train or improve its models on customer content, and not to retain user content. Google Cloud maintains SOC 2 Type 2 compliance.
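
Taken together, the fan-out to the three vendors amounts to building one encrypted HTTPS request per service. The endpoints and authentication headers below follow the vendors' public REST APIs, but the model names, prompt wording, and key environment variables are illustrative assumptions, not the pilot's actual configuration.

```python
import os

REVIEW_PROMPT = "Act as a grant reviewer and identify weaknesses in this draft:\n\n"

def build_llm_requests(draft_text: str) -> dict:
    """Build one HTTPS request spec (url, headers, JSON body) per vendor.

    Endpoints and auth headers follow each vendor's public REST API; the
    model names shown are placeholders chosen for illustration.
    """
    prompt = REVIEW_PROMPT + draft_text
    return {
        "openai": {
            "url": "https://api.openai.com/v1/chat/completions",
            "headers": {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '<key>')}"},
            "body": {"model": "gpt-4o",
                     "messages": [{"role": "user", "content": prompt}]},
        },
        "anthropic": {
            "url": "https://api.anthropic.com/v1/messages",
            "headers": {"x-api-key": os.environ.get("ANTHROPIC_API_KEY", "<key>"),
                        "anthropic-version": "2023-06-01"},
            "body": {"model": "claude-sonnet-4-20250514", "max_tokens": 2048,
                     "messages": [{"role": "user", "content": prompt}]},
        },
        "google": {
            "url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro:generateContent",
            "headers": {"x-goog-api-key": os.environ.get("GEMINI_API_KEY", "<key>")},
            "body": {"contents": [{"parts": [{"text": prompt}]}]},
        },
    }
```

Sending the same draft to three independently trained models is what lets the tool surface critiques that any single reviewer, human or machine, might miss.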

General Data Protection Regulation (GDPR)

All third-party vendors are compliant with the European Union’s General Data Protection Regulation.

Technology Risk Assessment Committee (TRAC)

The Automated Grant Feedback (AGF) tool has passed Western University’s Technology Risk Assessment Committee (TRAC) pre-assessment and is approved for internal use. (TRA-208)

Contact

For more information about Western’s automated grant feedback pilot, or for guidance on how best to use it, please contact James Shelley.