How it Works

Conscript is a Microsoft Power Platform application built on Power Automate and Dataverse. The Dataverse stores information about conscripts and squads, configuration data, and agent tasks and memories. A series of Power Automate flows provides the application's logic.

Key Components

E-mail Handler

The Conscript E-mail Handler is a Power Automate flow template that is disabled out-of-the-box. When a new conscript is provisioned ("mustered"), an enabled copy of the template is created that will listen for incoming e-mails in that conscript's mailbox. Each conscript must have its own e-mail handler flow.

When a new e-mail arrives in the conscript's mailbox, the e-mail handler:

  • Determines if the e-mail originated from inside or outside the organization.
  • Calls the Tokenizer to determine the e-mail token length, and then calls the E-mail Summarizer if necessary to shorten the e-mail.
  • Calls the Context Builder to create the base context that will be sent in the System Message to OpenAI.
  • Appends the e-mail metadata and body (or summary) to the context.
  • Retrieves the conscript's summary of previous interactions with the e-mail sender and appends it to the context.
  • Calls the Function Builder to build a list of functions that can be used in the situation.
  • Passes everything to the Execution Agent to call OpenAI and process the responses (the full sequence is sketched below).
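
For illustration, here is the same sequence expressed as a Python sketch. The hypothetical `flows` object and its members (tokenize, summarize_email, build_context, get_interaction_summary, build_functions, run_execution_agent) stand in for the flows and plugin described below, and the 6,000-token cutoff is an assumed example value, not Conscript's actual threshold.

```python
# A minimal sketch of the e-mail handler's sequence. Everything on the
# hypothetical `flows` object stands in for a flow or plugin described below.
MAX_EMAIL_TOKENS = 6000  # illustrative cutoff, not Conscript's real value

def handle_incoming_email(email: dict, flows) -> None:
    # Did the message originate inside or outside the organization?
    internal = email["sender_domain"] == email["org_domain"]

    # Shorten the body if it is too long to pass to OpenAI in full.
    body = email["body"]
    if flows.tokenize(body) > MAX_EMAIL_TOKENS:
        body = flows.summarize_email(body)

    # Base context from the Context Builder, then e-mail metadata and body.
    context = flows.build_context(email["conscript"])
    context += f"\n\nFrom: {email['sender']} (internal: {internal})"
    context += f"\nSubject: {email['subject']}\n\n{body}"

    # Prior interactions with this sender, then the applicable functions.
    context += "\n\n" + flows.get_interaction_summary(email["sender"])
    tools = flows.build_functions(internal=internal)

    # Hand everything to the Execution Agent, which calls OpenAI
    # (and picks a model via the Model Selector).
    flows.run_execution_agent(context, tools)
```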

Context Builder

The Context Builder is a Power Automate flow that creates the beginning of the "System Message" sent to OpenAI. Every request sent to the OpenAI backend begins with the same message, containing basic information about the conscript, its squad, and its Human Supervisor, plus general instructions that apply to every request. The Context Builder then passes the context back to the parent flow that called it.
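
As a rough illustration, the assembled base context might look like the following Python sketch. The field names and the wording of the instructions are assumptions; the real values come from Dataverse configuration.

```python
# A minimal sketch of what the Context Builder assembles. Field names and
# instruction wording are illustrative assumptions.
def build_context(conscript: dict) -> str:
    return "\n".join([
        f"You are {conscript['name']}, an AI conscript.",
        f"Squad: {conscript['squad']}.",
        f"Human Supervisor: {conscript['supervisor']}.",
        "Follow your standing instructions and only act through the functions provided.",
    ])

context = build_context({
    "name": "Clerk-01",
    "squad": "Accounts Payable",
    "supervisor": "jane.doe@example.com",
})
```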

Function Builder

The Function Builder is a Power Automate flow that pulls from the Dataverse the list of functions appropriate for the situation and returns it to the parent flow as a JSON object that is passed directly to OpenAI.
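
The sketch below shows the general shape of such a function list, using the Chat Completions tool/function definition schema. The specific functions and parameters are illustrative examples, not Conscript's actual catalogue.

```python
# Illustrative function list in the shape expected by the Chat Completions
# API. The send_email and conclude entries are example stand-ins.
tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an e-mail on behalf of the conscript.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient e-mail address"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "conclude",
            "description": "Finish processing the current e-mail.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]
```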

Execution Agent

The Execution Agent is the primary Power Automate flow that calls the OpenAI Chat Completions API. It takes the context and functions provided by the parent flow and crafts a request to the API. It calls the Model Selector to determine which OpenAI model to use, and then calls the API. The API is instructed to respond only with function calls. When a function is called, it is passed to the Function Handler to execute. The response from the Function Handler is appended to the chat and sent back to the API. This loop repeats until the Conclude function is called or the maximum number of steps is reached.
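
The loop can be sketched in Python with the OpenAI SDK as follows. The handle_function stub stands in for the Function Handler, and using tool_choice="required" to express the "function calls only" instruction is an assumption about how that constraint is implemented; the actual flow calls the REST API directly from Power Automate.

```python
# A minimal sketch of the Execution Agent's call-execute-append loop.
import json
from openai import OpenAI

client = OpenAI()
MAX_STEPS = 10  # illustrative maximum number of steps

def handle_function(name: str, arguments: dict) -> str:
    """Stand-in for the Function Handler flow."""
    return json.dumps({"status": "ok", "function": name})

def run_execution_agent(context: str, tools: list, model: str) -> None:
    # `model` comes from the Model Selector in the real flow.
    messages = [{"role": "system", "content": context}]
    for _ in range(MAX_STEPS):
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice="required",  # respond only with function calls (assumed mechanism)
        )
        message = response.choices[0].message
        messages.append(message)
        for call in message.tool_calls or []:
            result = handle_function(call.function.name, json.loads(call.function.arguments))
            # Append the function result to the chat before the next API call.
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
            if call.function.name == "conclude":
                return  # the agent has finished its work
```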

Function Handler

The Function Handler is called every time the AI makes a function call. If the function is configured to require approval, it seeks approval from the conscript's Human Supervisor via the native Power Platform Approvals workflow. It then looks up the definition of the function and executes it. If the function call resulted in an interaction with someone (e.g., an e-mail was sent), it calls the Interaction Summarizer to summarize the interaction and store it in the Dataverse. It then returns a response to the Execution Agent.
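
A minimal sketch of that branching logic follows, with the approval callback, function registry, and summarizer callback standing in for the Approvals workflow, the Dataverse function definitions, and the Interaction Summarizer flow.

```python
# Illustrative Function Handler logic. The registry entries and callbacks are
# stand-ins for Dataverse definitions and other flows.
def handle_function(call: dict, registry: dict, supervisor_approves, summarize_interaction) -> dict:
    definition = registry[call["name"]]

    # Functions flagged as requiring approval go through the Human Supervisor first.
    if definition.get("requires_approval") and not supervisor_approves(call):
        return {"status": "rejected", "reason": "Human Supervisor declined the request."}

    # Look up and execute the function definition.
    result = definition["execute"](**call["arguments"])

    # If the call produced an interaction (e.g. an e-mail was sent), summarize it.
    if definition.get("is_interaction"):
        summarize_interaction(call["arguments"].get("to"), result)

    return {"status": "ok", "result": result}
```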

E-mail Summarizer

The E-mail Summarizer is called whenever an e-mail exceeds a certain token length and therefore cannot be passed in its entirety to the OpenAI API. It strips down the e-mail and uses the OpenAI API to generate a summary that can be used in place of the full text.
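
A minimal sketch of the summarization call, assuming the e-mail has already been stripped of markup and quoted history; the prompt wording and model choice are illustrative.

```python
# Illustrative E-mail Summarizer call.
from openai import OpenAI

client = OpenAI()

def summarize_email(stripped_email: str, model: str = "gpt-3.5-turbo-16k") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the following e-mail so it can stand in for the full text."},
            {"role": "user", "content": stripped_email},
        ],
    )
    return response.choices[0].message.content
```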

Interaction Summarizer

The Interaction Summarizer is called whenever a conscript interacts with someone by sending or receiving an e-mail. It uses the OpenAI API to generate a summary of the interaction and stores it in the Dataverse, keyed by the e-mail address interacted with. If a summary already exists, it is also provided to the API so that the updated summary covers all interactions with that e-mail address.
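
A minimal sketch of the rolling-summary update, with an in-memory dictionary standing in for the Dataverse table and illustrative prompt wording:

```python
# Illustrative Interaction Summarizer logic keyed by e-mail address.
from openai import OpenAI

client = OpenAI()
memory: dict[str, str] = {}  # e-mail address -> running interaction summary

def record_interaction(address: str, interaction: str, model: str = "gpt-3.5-turbo") -> None:
    previous = memory.get(address)
    # Include any existing summary so the new one covers all interactions.
    prompt = interaction if previous is None else (
        f"Existing summary of past interactions:\n{previous}\n\nNew interaction:\n{interaction}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Maintain a concise summary of all interactions with this contact."},
            {"role": "user", "content": prompt},
        ],
    )
    memory[address] = response.choices[0].message.content
```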

Model Selector

The Model Selector is called by every flow that calls the OpenAI API and determines the appropriate OpenAI model to use for each request. It passes the pending request to the Tokenizer to determine its token length and then chooses one of the following models. GPT-3.5 models are only used when GPT-4 models are unavailable.

  • GPT-4
  • GPT-4-32K
  • GPT-3.5-TURBO
  • GPT-3.5-TURBO-16K

The model name is then returned to the parent flow to be used in API calls.
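
A minimal sketch of the selection logic, assuming the published context windows for these models (8,192 and 32,768 tokens for GPT-4 and GPT-4-32K; 4,096 and 16,384 for the GPT-3.5-Turbo variants) and an assumed reserve for the completion; how GPT-4 availability is detected is not shown.

```python
# Illustrative model selection by token count. The gpt4_available flag and the
# reserve size are assumptions; the thresholds are the models' context windows.
def select_model(prompt_tokens: int, gpt4_available: bool, reserve: int = 1024) -> str:
    if gpt4_available:
        # GPT-4 has an 8,192-token context window; GPT-4-32K has 32,768.
        return "gpt-4" if prompt_tokens + reserve <= 8192 else "gpt-4-32k"
    # GPT-3.5 models are used only when GPT-4 models are unavailable.
    return "gpt-3.5-turbo" if prompt_tokens + reserve <= 4096 else "gpt-3.5-turbo-16k"

print(select_model(3000, gpt4_available=True))   # -> gpt-4
print(select_model(12000, gpt4_available=True))  # -> gpt-4-32k
```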

Tokenizer

The Tokenizer is a Dataverse Custom API that calls a custom .NET Dataverse plugin. The plugin takes a string and returns the number of tokens it will require using the cl100k_base encoding. It is a completely self-contained .NET assembly that makes no external calls.
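
Although the plugin itself is .NET, the same calculation can be illustrated in Python with the tiktoken library, which implements the cl100k_base encoding:

```python
# A Python analogue of the Tokenizer plugin's calculation using tiktoken.
import tiktoken

def count_tokens(text: str) -> int:
    """Return the number of cl100k_base tokens the string will require."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

print(count_tokens("Please review the attached travel request."))
```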