


    How to Prepare Sales Data for an AI Pricing Model

    Introduction: Why Clean, Well-Prepared Data Is the Secret Ingredient in AI Pricing

    As distributors across every industry look to gain a competitive edge, AI-powered pricing models are becoming one of the most powerful tools available. These models can uncover hidden patterns in historical transactions, predict customer sensitivity to price changes, and recommend optimized prices that protect margin while staying competitive.

    But before an algorithm can learn anything, it needs clean, well-structured data. Most distributors already sit on a goldmine of information — product catalogs, customer order histories, cost data, and supplier terms — yet these valuable records are often scattered, inconsistent, or incomplete. That’s why the first step in building a successful model is learning how to prepare sales data for an AI pricing model.

    In this article, we’ll walk through how to collect, clean, and enhance your existing sales and operations data so it’s ready for machine learning. By the end, you’ll understand which data sources matter most, how to transform them into model-ready inputs, and how better data can translate directly into smarter, more profitable pricing decisions.

    The Data Distributors Already Have (and Why It’s a Goldmine)

    The good news for most distributors is that the foundation for an AI pricing model is already sitting in your systems — it just needs to be unlocked. Every quote and sales order tells a story about what your customers value, what they’re willing to pay, and how your prices perform in the market. By gathering this information into a clean, structured dataset, you can train a machine learning model to detect patterns that no human could ever spot at scale.

    Here are some of the most valuable types of data that distributors already possess (a short loading-and-joining sketch follows the list):

    • Sales transactions: Each line item — product, gross profit margin, and customer — forms the backbone of your training data. These records show how price interacts with real-world buying behavior.
    • Product information: Descriptions, SKUs, product categories and groups, stock or non-stock status, and cost data help the model understand relationships between products and margin structures.
    • Customer data: Attributes such as industry, region, customer tier and/or customer type (e.g., contractor vs. OEM) allow the model to personalize pricing recommendations.
    • Supplier and cost data: Fluctuating supplier prices and terms can be key variables when predicting optimal selling prices.
    • Historical quotes and win/loss data: This often-overlooked data is extremely valuable for understanding price sensitivity and competitive dynamics. Valuable information includes quoted lead times, quantity on hand at the time of quote, quantity quoted, outcome (won or lost) and salesperson.
    • Seasonality and time-based data: Sales patterns by month, quarter, or season help the model adjust for demand cycles.
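
    To make that combination concrete, here is a minimal pandas sketch that pulls these sources into one line-item table. The file names and join keys (sales_transactions.csv, product_id, customer_id) are hypothetical placeholders; substitute whatever your ERP, CRM, and quoting systems actually export.

```python
import pandas as pd

# Hypothetical CSV exports from the ERP/CRM; adjust names and keys to your systems
sales = pd.read_csv("sales_transactions.csv")    # one row per invoice or quote line
products = pd.read_csv("products.csv")           # SKU, category, cost, stock status
customers = pd.read_csv("customers.csv")         # tier, region, customer type

# Combine everything into a single transaction-level table keyed on shared identifiers
dataset = (
    sales
    .merge(products, on="product_id", how="left")
    .merge(customers, on="customer_id", how="left")
)

print(dataset.head())
```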

    When all of this information is combined, it becomes a dynamic pricing engine waiting to happen. The next step is ensuring that this data is clean, consistent, and machine-readable — which is where the real work (and value) begins.

    Cleaning and Normalizing: Turning Raw Data into Model-Ready Input

    Raw sales data, no matter how rich, is rarely ready for machine learning. It’s full of duplicates, missing fields, inconsistent formats, and outdated records. Before your AI model can recognize pricing patterns, it first has to trust the data it’s being trained on. That’s why data cleaning and normalization are so critical — they transform messy sales records into a structured, reliable dataset that a pricing algorithm can actually learn from.

    Here are the most important steps in preparing distributor data for an AI pricing model (a minimal pandas sketch follows the list):

    • Remove duplicates and errors. Repeated invoice lines or miskeyed prices can distort model training. Even a few outliers can cause the model to learn incorrect pricing relationships.
    • Handle missing or incomplete data. When costs, quantities, customer category, product category or dates are missing, use business logic or statistical methods to fill gaps — or remove unusable rows entirely. Consistency is more important than volume.
    • Fix or remove records with invalid data (prices or costs at or below zero, negative lead-time days, negative quantities, etc.). Do not include records for internal transactions (quote or sale records for internal transfers, for example).
    • Normalize units and currencies. Distributors often sell the same product in different units (e.g., cases, boxes, or singles). Convert all transactions into a common base unit and currency so the model can compare apples to apples.
    • Align product and customer identifiers. Standardize SKUs, product categories, and customer IDs across all systems (ERP, CRM, quoting tools). A single, unified key for each entity prevents confusion during model training.
    • Tokenize categorical data. Many AI models can’t directly read text fields like “Region = Midwest” or “Customer Type = Contractor.” Tokenization — assigning numeric or encoded values — allows these labels to become usable inputs.
    • Group numerical fields into bins. Continuous fields such as “quantity sold” or “order size” can be bucketed into ranges (e.g., 1–10, 11–50, 51–100) to help the model identify threshold effects, such as volume-based discount behavior.
    • Detect and treat outliers. An occasional “$0.01” sale or “10,000-unit” order can throw off training results. Flag and investigate these before feeding them into your model.
    • Remove any quotes or sales whose prices are pre-determined (sales based on pre-agreed contracts or price sheets, for example).
    • Remove any quotes for items that have never been won, as the model might be overly aggressive when attempting to price these items.
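
    Below is a minimal pandas sketch of these cleaning steps. The file name, column names (price, uom, units_per_case, is_internal_transfer, and so on), and bin edges are hypothetical placeholders that illustrate the pattern rather than prescribe your schema.

```python
import pandas as pd

df = pd.read_csv("quotes.csv")

# Remove duplicates and records with invalid values
df = df.drop_duplicates()
df = df[(df["price"] > 0) & (df["cost"] > 0) & (df["qty_quoted"] > 0)]
df = df[df["lead_time_days"] >= 0]
df = df[~df["is_internal_transfer"]]             # drop internal transactions

# Handle missing or incomplete data with simple business logic
df = df.dropna(subset=["product_id", "customer_id", "quote_date"])
df["salesperson_id"] = df["salesperson_id"].fillna("UNKNOWN")

# Normalize units: convert CASE quantities into the common base unit (EA)
is_case = df["uom"] == "CASE"
df.loc[is_case, "qty_quoted"] *= df.loc[is_case, "units_per_case"]
df.loc[is_case, "uom"] = "EA"

# Tokenize categorical data and bin continuous quantities
df["region_code"] = df["customer_region"].astype("category").cat.codes
df["qty_bucket"] = pd.cut(
    df["qty_quoted"],
    bins=[0, 10, 50, 100, 500, float("inf")],
    labels=["1-10", "11-50", "51-100", "101-500", "500+"],
)
```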

    By the end of this stage, your raw data becomes a standardized, trustworthy foundation. Only then can it reveal the true signals behind pricing performance — signals that a well-trained AI model can amplify into real margin improvement.

    Feature Engineering for Better Predictions

    Once your data is clean and consistent, the next step is to make it more informative. Feature engineering is the process of transforming raw data into new variables (or “features”) that help your AI pricing model recognize the subtle factors influencing customer behavior.

    Think of it as giving the model more context — the same way an experienced sales rep instinctively knows that a contractor ordering 1,000 units in May behaves differently from a retail customer ordering ten units in December.

    Here are some practical ways distributors can enhance their datasets through feature engineering (see the sketch after this list):

    • Create ratio-based features. Calculating fields such as margin percentage, discount from list price, or average revenue per customer helps the model see relationships that aren’t obvious in raw sales data.
    • Add time-based context. Derived features like days since last purchase, month of year, or season capture repeat buying patterns and seasonal demand.
    • Segment by customer and product attributes. Creating flags or encoded values such as “key account”, “preferred supplier”, or “new product launch” gives the model behavioral cues.
    • Aggregate transactional history. Summarizing data into higher-level metrics — like average order size or total spend per quarter — helps smooth out noise and reveal long-term trends.
    • Use tokenized and bucketed fields. Earlier steps like tokenizing categories or binning order quantities now become the building blocks for modeling how price elasticity changes across segments.
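
    Here is a short sketch of a few of these features, continuing from the cleaned quote table built earlier. Column names such as list_price and quote_id are hypothetical placeholders.

```python
import pandas as pd

df = pd.read_csv("quotes_clean.csv")             # cleaned table from the previous step (hypothetical)
df["quote_date"] = pd.to_datetime(df["quote_date"])

# Ratio-based features
df["margin_pct"] = (df["price"] - df["cost"]) / df["price"]
df["discount_from_list"] = 1 - df["price"] / df["list_price"]

# Time-based context
df["month"] = df["quote_date"].dt.month
df = df.sort_values(["customer_id", "quote_date"])
df["days_since_last_quote"] = df.groupby("customer_id")["quote_date"].diff().dt.days

# Aggregated customer history, merged back in by key
customer_stats = (
    df.groupby("customer_id")
      .agg(avg_order_qty=("qty_quoted", "mean"),
           total_quotes=("quote_id", "count"))
      .reset_index()
)
df = df.merge(customer_stats, on="customer_id", how="left")
```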

    Good feature engineering transforms your sales database from a record of past transactions into a simulation of your market dynamics. When these enhanced features are used to train your AI pricing model, it doesn’t just learn what happened — it begins to infer why.

    From Raw Data to Model-Ready: An Example Schema

    To make this more tangible, let’s look at how typical distributor data evolves from raw quotes to model-ready training data.

    Typical raw quote data:

    Quote Id | Quote Date | Customer Id | Product Id | Qty Quoted | UoM | Quoted GP | SalesPerson Id
    1001 | 2029-01-02 | CUST-001 | PROD-010 | 100 | EA | 13.5% | SALES-100
    1002 | 2029-01-02 | CUST-002 | PROD-010 | 1 | CASE | 13.2% | (missing)

    Normalize and populate missing data:

    Quote Id | Quote Date | Customer Id | Product Id | Qty Quoted | UoM | Quoted GP | SalesPerson Id
    1001 | 2029-01-02 | CUST-001 | PROD-010 | 100 | EA | 13.5% | SALES-100
    1002 | 2029-01-02 | CUST-002 | PROD-010 | 250 | EA | 13.2% | SALES-104

    Link related sales orders to identify won and lost quotes:

    Quote Id | Quote Date | Customer Id | Product Id | Qty Quoted | UoM | Quoted GP | SalesPerson Id | Outcome
    1001 | 2029-01-02 | CUST-001 | PROD-010 | 100 | EA | 13.5% | SALES-100 | lost
    1002 | 2029-01-02 | CUST-002 | PROD-010 | 250 | EA | 13.2% | SALES-104 | won

    Add additional details about the customer, product, etc.:

    Quote Id | Quote Date | Customer Id | Product Id | Qty Quoted | UoM | Quoted GP | SalesPerson Id | Outcome | Customer Tier | Customer Region | Product Category
    1001 | 2029-01-02 | CUST-001 | PROD-010 | 100 | EA | 13.5% | SALES-100 | lost | 1 | West | Fasteners
    1002 | 2029-01-02 | CUST-002 | PROD-010 | 250 | EA | 13.2% | SALES-104 | won | 3 | North | Bolts

    Enhance & Engineer the data:

    Quote Id | Quote Date | Customer Id | Product Id | Qty Quoted | UoM | Quoted GP | SalesPerson Id | Outcome | Customer Tier | Customer Region | Product Category | Qty Bucket | Discount from List | Month
    1001 | 2029-01-02 | CUST-001 | PROD-010 | 100 | EA | 13.5% | SALES-100 | lost | 1 | West | Fasteners | 0-100 | 5% | 1
    1002 | 2029-01-02 | CUST-002 | PROD-010 | 250 | EA | 13.2% | SALES-104 | won | 3 | North | Bolts | 101-500 | 12% | 1

    At this stage:

    • Row-level ratios like margin% and discount% are new columns in the same table.
    • Aggregated metrics (e.g., customer lifetime value) may be calculated separately and merged back in by key (e.g., customer_id).
    • Tokenized fields allow categorical data to be processed numerically.
    • Bucketed fields (like quantity ranges) help the model learn threshold effects such as volume discounts.

    The result is a flattened, model-ready table where each row represents one transaction, but each column encodes valuable business knowledge. This is what modern AI pricing models are trained on — a single, rich, structured dataset that reflects both transactional detail and business context.
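
    As a rough sketch of how the Outcome column and the encoded fields might be produced, the snippet below links quotes to sales orders and then one-hot encodes the categorical columns. The sales-order file and the rule "a quote is won if a matching sales order exists" are simplifying assumptions for illustration.

```python
import pandas as pd

quotes = pd.read_csv("quotes_enriched.csv")      # the table built in the steps above (hypothetical)
orders = pd.read_csv("sales_orders.csv")         # hypothetical export of booked sales orders

# Treat a quote as won if any sales order references its quote_id
quotes["outcome"] = (
    quotes["quote_id"].isin(orders["quote_id"]).map({True: "won", False: "lost"})
)

# One-hot encode categorical columns so the model sees numeric inputs
model_ready = pd.get_dummies(
    quotes,
    columns=["customer_region", "product_category", "qty_bucket"],
)
```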

    Enhancing Data with External Context

    Even the cleanest internal dataset can only describe what’s already happened inside your business. To train an AI pricing model that reacts to the market, not just your history, you’ll want to enrich your data with external signals. These contextual factors help the model recognize the why behind pricing shifts—things like seasonality, supplier volatility, or regional demand patterns.

    Here are a few powerful types of external data you can integrate (a small join sketch follows the list):

    • Market and commodity indexes. For distributors whose costs depend on raw materials (steel, copper, resin, etc.), linking supplier prices to public commodity indexes gives the model a real-world cost baseline.
    • Freight and logistics costs. Adding average freight rates or fuel costs by region can help the model understand variations in delivered pricing and margin erosion.
    • Economic indicators. Regional GDP growth, interest rates, or housing starts can all influence industrial demand. Including these variables lets your model anticipate pricing pressure before it shows up in sales data.
    • Weather and seasonality. For sectors tied to climate (HVAC, landscaping, construction materials), temperature or precipitation data can reveal when demand spikes are most likely.
    • Competitor or market pricing. Even limited competitive intelligence—such as average market prices from a benchmarking service—helps the model learn where your price points sit in context.
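
    As one illustration, the sketch below joins a monthly commodity index onto the quote table by month. The index file (copper_index.csv) and its columns are hypothetical; any external series keyed by date or region can be merged the same way.

```python
import pandas as pd

quotes = pd.read_csv("quotes_enriched.csv")      # hypothetical model-ready quote table
quotes["quote_month"] = pd.to_datetime(quotes["quote_date"]).dt.to_period("M")

copper = pd.read_csv("copper_index.csv")         # hypothetical columns: month, copper_index
copper["quote_month"] = pd.to_datetime(copper["month"]).dt.to_period("M")

enriched = quotes.merge(
    copper[["quote_month", "copper_index"]],
    on="quote_month",
    how="left",
)
```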

    When combined with your cleaned and engineered sales data, these external signals transform your pricing model from a reactive tool into a forward-looking one. The model can then spot correlations your teams might miss, like how freight volatility or regional construction activity subtly shifts price sensitivity.

    Ultimately, data enrichment bridges the gap between your transactional reality and the economic environment you operate in. That’s where the predictive power of AI becomes a genuine strategic advantage.

    Data Governance and Ongoing Maintenance

    Preparing your data for an AI pricing model isn’t a one-time task — it’s an ongoing discipline. The moment you start training models, your data pipeline becomes part of your daily operations. If the data feeding the model degrades, so will the model’s accuracy and trustworthiness.

    Here are the key practices every distributor should adopt to keep their data healthy:

    • Standardize data entry and definitions. Ensure that all departments use consistent product categories, customer classifications, and units of measure. Small inconsistencies compound quickly in large datasets.
    • Monitor for data drift. Over time, market conditions and internal processes change. Regularly compare current data distributions (like average margins or order sizes) to historical ones to spot shifts that may require retraining the model (a simple drift-check sketch follows this list).
    • Schedule periodic audits. Review data integrity quarterly or semiannually. This might include sampling transactions for accuracy, checking for missing fields, and validating that external feeds (like commodity prices) are still updating correctly.
    • Document data lineage. Keep a record of where each data source originates, what transformations are applied, and who owns it. This transparency makes troubleshooting and compliance far easier down the line.
    • Retrain the model on a schedule. Even a perfectly prepared dataset becomes outdated as the market evolves. Set a cadence for retraining your AI pricing model—monthly, quarterly, or annually—depending on your sales volume and industry volatility.
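
    One simple way to start is to compare summary statistics of recent data against the snapshot the model was trained on, as in the sketch below. The 10% threshold, file names, and column names are illustrative assumptions; tune them to your own volumes and volatility.

```python
import pandas as pd

history = pd.read_csv("training_snapshot.csv")   # data the current model was trained on
current = pd.read_csv("recent_quotes.csv")       # quotes from the last month or quarter

for col in ["margin_pct", "qty_quoted", "discount_from_list"]:
    hist_mean = history[col].mean()
    cur_mean = current[col].mean()
    shift = abs(cur_mean - hist_mean) / abs(hist_mean)
    if shift > 0.10:   # flag a >10% shift for review and possible retraining
        print(f"Drift warning: {col} mean moved {shift:.1%} from the training baseline")
```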

    Strong governance ensures that the effort you put into collecting, cleaning, and enhancing your data continues to pay off. Over time, this steady flow of reliable, enriched information becomes your most valuable competitive asset — powering not just pricing optimization, but smarter forecasting, inventory management, and customer insights across the business.

    Conclusion: Turning Clean Data into Profitable Intelligence

    For distributors, success with artificial intelligence begins long before the first line of code. The real magic happens when clean, consistent, and enriched data becomes the foundation for smarter decision-making. By collecting, preparing, and enhancing your sales data, you’re not just creating a dataset — you’re building a digital model of how your market behaves.

    With that foundation in place, you’re ready to take the next step: transforming your prepared data into a working AI pricing model. In the next article — How to Build an AI Pricing Model Using Machine Learning in Python — we’ll walk through how to feed this data into a machine learning framework, train your model, and start generating optimized pricing recommendations that boost both margin and competitiveness.

    Investing in data preparation today means unlocking long-term pricing intelligence tomorrow — a true strategic edge in an increasingly data-driven distribution landscape.


    Creating a custom private chatbot for your Business

    How to create an agent within Microsoft 365 with proprietary knowledge sources for answering questions

    In every organization, knowledge lives in scattered places — tucked inside PDFs, SharePoint folders, policy manuals, and the collective minds of employees. The problem isn’t a lack of information; it’s the daily friction of finding it. That’s where creating a custom chatbot for your business powered by your company’s proprietary knowledge becomes transformative. Instead of digging through folders or pinging colleagues for answers, employees can simply ask: “What’s our policy for client data retention?” or “How do I reset the field service tablet?” and get an instant, accurate reply drawn directly from the organization’s verified sources.

    This kind of system doesn’t just save time; it elevates the reliability of information. By connecting the chatbot to internal documents, you ensure that every answer reflects the company’s latest guidance — not the internet’s best guess. Microsoft’s Copilot Studio makes this easier than ever, allowing you to build custom agents that understand your domain, your language, and your processes. Think of it as creating your own ChatGPT — except it knows your business.

    It’s worth noting that Copilot Studio agents are, by design, internal tools. They live inside your Microsoft 365 environment and are accessible only to authenticated users within your organization. That limitation is intentional — it keeps sensitive company data secure. While Microsoft is gradually expanding options for public-facing deployment, the real power today lies in using these custom copilots to boost efficiency, reduce knowledge bottlenecks, and centralize institutional wisdom safely behind your firewall.

    In this post, we’ll walk through how to create a custom LLM-based agent using Microsoft’s Copilot Studio, upload your proprietary knowledge documents, and configure it to answer real business questions with authority — effectively turning your company’s documentation into a living, conversational assistant.

    What you’ll build

    A Microsoft agent that references internal/proprietary knowledge sources when answering questions.

    Prerequisite: Microsoft 365

    Step-by-step

    1. Create Your Custom Chatbot Agent

        In Microsoft 365, click the ‘Create agent’ link in the left-hand menu bar:

        The two-panel Agent configuration window will appear:


        On the left side is the Describe tab. Entering information into this tab will allow Copilot Studio to do some automatic setup behind the curtain. Here you can tell Copilot in natural language what kind of agent you want to build. Think of it as the agent’s “origin story.” You can start by typing a natural-language description (“I want an AI that answers customer service questions from my company’s manuals”) or select a template from Microsoft’s starter options, such as Career Coach, Customer Insights Assistant, or Idea Coach. Each template comes preloaded with example intents and tone settings, which can save time if your use case fits a common pattern.

        Entering information into the Describe tab isn’t mandatory, but it’s like having an AI co-designer for the first five minutes. Entering even a one-sentence goal (“Build an agent that answers technical support questions from my company’s service manuals”) can prime the system with an initial personality and intent structure. Skipping straight to Configure gives you a blank slate — ideal for advanced users who prefer full creative control, but slightly more work for beginners.

        The right side of the interface is a preview of the agent: a design canvas that gradually fills in as the agent is configured. Later, as knowledge documents are added and additional instructions are given, it becomes a workspace where you can test and refine the agent’s behavior.

        2. Configure the agent

        Click on the Configure tab (just beside Describe). This is where you’ll complete the configuration and link your proprietary knowledge documents — PDFs, manuals, spreadsheets, or databases — so your custom LLM can answer questions using real company data rather than generic web knowledge. Here’s where you can make your agent shine with insider expertise.

        The Configure tab initially looks like this:

        On the left-hand side of this screen, you’ll see three main sections that matter most:

        Template Dropdown

        At the very top, you’ll find the Template option. If you selected one earlier (like Career Coach or Customer Insights Assistant), you’ll see it listed here. Templates act as preconfigured blueprints that come with example intents and tone settings. Choosing None gives you a blank slate — ideal for building a fully custom agent powered by your company’s documents and internal data.

        Pro Tip: If your use case is highly specific — such as a compliance advisor or technical support agent for internal systems — start with “None.” This avoids baked-in behaviors that can conflict with your proprietary content. Templates are best for quick prototypes, not precision builds.

        Details Panel (Name and Description)

        This is where your agent’s public identity takes shape.

        • Name: The title your users will see when they interact with the chatbot. Choose something meaningful — for example, Legacy Edge Knowledge Assistant or HR InfoBot. (The examples later in this article use an agent named EdgeAssist.)
        • Description: A one-sentence summary that tells users what the agent does. This description also guides the underlying model: it acts as metadata to prime its understanding of its domain and purpose.
        • Icon: Click the pencil button to change the icon that is shown for the agent.

        Pro Tip: Before writing your agent’s description, spend a few minutes crafting a mission statement in plain English — one or two sentences that describe what your agent should know and how it should respond. Write the description as if you’re briefing a new employee. The clearer the description, the sharper the responses. Keep the description under two sentences and use active verbs (“provides”, “retrieves”, “summarizes”). This helps both Copilot Studio and human users instantly understand the agent’s scope — and prevents it from over-generalizing into unrelated areas.

        Instructions Panel

        This large text field is the most powerful section — it defines how your agent thinks and behaves. Here you’ll describe:

        • What the agent should do
        • How it should sound (professional, conversational, technical, etc.)
        • Any rules or boundaries it must follow

        In essence, this is your agent’s system prompt — the foundational guidance the LLM (large language model) reads before interacting with users. Microsoft calls this “Instructions,” but think of it as the “mission script” for your AI.

        Example:

        “You are an internal support assistant for Legacy Edge. Your job is to answer employee questions using official company knowledge documents. Always cite the relevant document when responding. If a question is outside your scope, respond: ‘I don’t have that information available internally.’ Maintain a helpful and factual tone.”

        Pro Tip: Treat the Instructions like programming logic written in plain English. Every word affects how the agent behaves. Avoid vague directions like “Be smart and friendly.” Instead, define purpose, style, and boundaries clearly — especially when the agent handles proprietary data.

        What Happens Next

        Once you’ve completed these fields, you’re ready to click Create in the top-right corner. That’s the moment when Copilot Studio actually instantiates your agent — giving it a workspace and enabling the right-hand chat window for live testing. From there, you’ll move on to uploading your internal knowledge documents and fine-tuning how the agent retrieves and summarizes information.

        Note: if you click an option that takes you away from the configuration screen after the chatbot has been created, you can always return to the configuration screen by clicking the three dots next to the name of your chatbot (your new chatbot should appear in the left-hand menu under ‘Agents’) and selecting Edit.

        3. Add knowledge sources

        Once your agent has a name, description, and behavior defined, it’s time to give it something to know. The Knowledge section is where you connect your internal data sources — the documents, files, or URLs that the chatbot will draw from when answering questions.

        When you click inside the Knowledge field, you’ll see two main options:

        1. Upload or link content directly, such as PDFs or Microsoft Office files.
        2. Reference existing online sources, such as OneDrive, SharePoint, or public URLs.

        Each approach has its strengths, and the best option depends on how dynamic your information is.

        Uploading Files Directly

        Uploading files gives your chatbot a static snapshot of the knowledge at the time of upload. This is ideal for material that doesn’t change often — things like standard operating procedures, troubleshooting guides, or technical reference manuals.

        • Pros: Simple setup, fast ingestion, no dependency on external systems.
        • Cons: You’ll need to re-upload documents when content changes.

        Linking to OneDrive or SharePoint

        Referencing cloud-based storage (OneDrive or SharePoint) lets your Copilot dynamically access the latest versions of those files. This is the better choice if your documentation is frequently updated by multiple teams.

        • Pros: Always current, centralizes updates, ideal for shared corporate documentation.
        • Cons: Requires consistent access permissions; if the file is moved or renamed, the link breaks.

        Pro Tip: When you upload files directly, those documents are ingested, indexed locally, and pre-processed into searchable text embeddings optimized for retrieval. This typically results in faster response times (by one or two seconds). Usually a hybrid approach is best — upload your static core documents that do not change frequently (SOPs, policy documents, troubleshooting guides) and link dynamic resources like “Known Issues” spreadsheets or service logs stored in SharePoint.


        What Types of Documents Work Best

        Copilot Studio supports a variety of document formats, but not all files are equal in usefulness. Here’s how they stack up:

        • PDFs: Excellent for finalized procedures, policies, and manuals. They preserve layout and structure, making them easy for the model to parse.
        • Word Documents (.docx): Great for technical explanations, onboarding guides, or FAQs. They’re readable and editable, so you can easily update knowledge content.
        • Excel Spreadsheets: Useful for structured data — such as error code lookups, pricing matrices, or configuration values. However, keep them simple; the model reads cell text, not formulas or visuals.
        • PowerPoint Files: Fine for summarizing visuals or presenting workflows, but only the text portions (titles and speaker notes) are parsed.
        • URLs: Handy if your company hosts documentation externally (like a support portal or documentation site). Be sure the pages don’t require authentication; Copilot can only read public URLs.

        Pro Tip: Avoid uploading raw exports, complex tables, or engineering drawings. Instead, upload or link documents that explain those materials in text — for example, a “Product Configuration Reference Guide” is far more useful than a schematic file with no context.


        Choosing the Right Knowledge Sources

        When deciding what to feed your Copilot, prioritize documents that provide context and clarity. Good knowledge candidates include:

        • Standard Operating Procedures (SOPs) — Yes. These are gold; they define the official way things are done.
        • Employee Manuals — Only if your chatbot handles HR or policy questions. Otherwise, they clutter the knowledge base.
        • Technical Specifications — Excellent if they contain plain-language explanations and troubleshooting details.
        • Product Catalogs — Useful for sales or support bots, but less valuable if they’re mostly images or SKU tables without text descriptions.
        • Drawings or Engineering Blueprints — Generally not effective; the model can’t interpret images or CAD formats.

        Pro Tip: Before uploading, skim each document and ask, “Would I hand this to a new hire to help them solve a problem?” If the answer is yes, it probably belongs in your chatbot’s knowledge base.


        Prioritizing Knowledge Sources

        After adding your files or links, toggle “Prioritize the knowledge sources you added for agent knowledge-based queries” to on. This instructs the agent to rely on your internal content first before using its general reasoning abilities — a crucial setting for keeping responses accurate and aligned with company policy.

        Once your documents are added and prioritized, your Copilot now has a mind of its own — filled with Legacy Edge’s proprietary knowledge, ready to deliver accurate, document-sourced answers.

        Pro Tip: Keep your uploaded documents clean and well-labeled; Microsoft’s ingestion process works best when file names and content headers clearly reflect the topics they cover. Think “Product_FAQ_June2025.pdf” instead of “final_v3_reallyfinal.pdf.”

        Enabling Capabilities and Suggested Prompts

        Once your knowledge documents are connected, scroll down to the Capabilities and Suggested Prompts sections. These settings shape how your Copilot processes information and how users interact with it.


        Capabilities

        The Capabilities section controls the advanced tools your chatbot can use. Currently, Copilot Studio offers two main options: Code Interpreter and Image Generator.

        Code Interpreter

        When this option is enabled, your agent can perform lightweight calculations, analyze data, or process snippets of code. For technical or engineering environments, this can be incredibly useful — for example, parsing error logs, formatting configuration strings, or doing quick math related to system parameters.

        • When to use it: Enable it if your chatbot will handle any queries involving formulas, data analysis, or troubleshooting automation scripts.
        • When to skip it: Leave it off if your agent deals strictly with procedural knowledge or policies (it adds no benefit to general knowledge lookup tasks).

        Example query that would need Code Interpreter:

        User: “What’s the average system response time if server A reports 310ms and server B reports 270ms?”

        EdgeAssist: “The average is 290 milliseconds.”

        Pro Tip: Activating the Code Interpreter doesn’t make the agent a full Python environment — it’s more like a quick analytical assistant. Keep it on for demos involving data-driven reasoning, but test responses carefully before deploying it to production.


        Image Generator

        Turning this on allows your Copilot to generate simple images — useful in creative or presentation contexts. For an internal technical support bot like EdgeAssist, it’s not essential, but you might enable it to demonstrate how Copilot can illustrate workflows, diagrams, or UI steps.

        • When to use it: Helpful for documentation, onboarding, or training use cases.
        • When to skip it: Disable for performance-critical technical support bots — image generation can add unnecessary complexity.

        Suggested Prompts

        The Suggested Prompts section helps guide users by showing examples of what they can ask the chatbot. Think of this as a built-in conversation starter menu — especially helpful for new users who aren’t sure what to type.

        When you click “Add a suggested prompt,” you can pre-fill several example questions. These prompts don’t just improve usability; they quietly teach your users the kinds of queries your chatbot is designed to handle.

        Examples:

        • “How do I fix a timeout error in the automation module?”
        • “What are the steps to reset a client’s API key?”
        • “Where can I find the latest release notes for the analytics engine?”
        • “What causes a configuration mismatch during deployment?”

        Pro Tip: Use this feature like a mini onboarding experience and show a variety of uses. For example, include one prompt that demonstrates the agent’s range (“Show me how to configure a new client system”), one that shows its precision (“What’s the fix for error code 421?”), and one that shows its ability to answer an often-asked non-technical question (“What should I do if documentation doesn’t cover a client issue?”).

        The Suggested Prompts appear when users open a new chat, reducing the learning curve and encouraging meaningful engagement from the start.


        Wrapping Up This Step

        The Capabilities and Suggested Prompts features may seem small, but they’re what turn a static assistant into an interactive teammate. Enabling capabilities extends what your Copilot can do, while thoughtful suggested prompts guide users toward what it should do best.

        At this stage, your internal chatbot can reason, retrieve knowledge, and communicate clearly with your team. The next step is deployment: deciding where employees can access it (in Teams, on your internal portal, or embedded within your support dashboard).

        4. Share the Agent

        Once you’ve configured the Agent’s knowledge, capabilities, and prompts, it’s time to make it available to your team. Publishing in Microsoft Copilot Studio is straightforward, but understanding the access options ensures your chatbot stays secure and reaches the right audience.


        Publishing the Agent

        At the top right of the configuration screen, you’ll see the Publish button (or Update, if you’ve already published once). Clicking this triggers Microsoft Copilot Studio to build and deploy your agent inside your Microsoft 365 environment.

        Once published, your chatbot is ready to use — but by default, it’s internal only. That means it’s available to authenticated users within your Microsoft Entra (formerly Azure Active Directory) tenant.

        After publishing, you can access your deployment options under the Share button (next to Update).


        Sharing Options

        When you click Share, Copilot Studio will show you several ways to make your agent accessible:

        1. Microsoft Teams Integration

        This is the most common deployment path for internal copilots. With a single click, you can add EdgeAssist to your organization’s Teams environment. Users can chat with it just like they would with a colleague — perfect for quick troubleshooting or referencing technical documentation mid-meeting.

        • Pros: Frictionless access; no separate login; easy for employees to adopt.
        • Cons: Limited visibility outside Teams unless embedded elsewhere.

        2. Web App Access

        Copilot Studio also lets you deploy your chatbot as a standalone web app — a simple, branded chat interface your employees can access via a unique URL.

        • Pros: Works anywhere within your organization; easy to bookmark or embed in an internal support portal.
        • Cons: Still requires Microsoft login; cannot be accessed anonymously by external users.

        3. Embedding via Power Pages or Internal Websites

        For a more integrated experience, you can embed your chatbot directly into your company’s intranet or Power Pages site using an iframe or web component provided by Microsoft.

        • Pros: Seamless integration into existing systems or dashboards.
        • Cons: Requires Power Platform permissions and possibly some admin setup.

        About Public Access

        At this time, Microsoft Copilot Studio is designed primarily for internal copilots. That means public, anonymous access is turned off by default. While Microsoft has begun rolling out limited preview features for anonymous sharing, most organizations (especially those handling proprietary data) will want to keep this feature disabled for security reasons.

        For demonstration or testing with external partners, you can manually invite specific Microsoft accounts with limited permissions — just make sure those users are added as “guests” to your tenant.

        Pro Tip: Keep your first deployment internal. Once your team has used the chatbot and verified that responses are accurate, you can later publish an external-facing version (for clients or website visitors) using Microsoft Power Pages with controlled access.


        Verifying the Chatbot

        After publishing, click New Chat in the right-hand preview pane to test it in action. Ask a few real-world questions drawn from your uploaded knowledge base to confirm:

        • It retrieves the correct documents.
        • Responses are accurate and concise.
        • The tone and scope align with your “Instructions.”

        If any answers seem off, go back to the Configure tab, adjust the instructions or add more knowledge sources, and hit Update to republish. You can iterate as often as needed — each publish creates a new version that instantly replaces the old one.

        Pro Tip: If the chatbot returns wrong information, chat with it until it recognizes and admits the mistake. If it doesn’t volunteer a reason, ask it to explain why the mistake happened. Then ask: “Give me the exact instruction statement that I should add to this Agent’s Instructions so that this mistake never happens again.” Add that statement to the Agent’s Instructions, click Update, and test with more questions to confirm the new guidance made the chatbot more accurate. Repeat this pattern until the chatbot’s responses are acceptable.


        Wrapping Up

        Publishing marks the transition from design to deployment — from blueprint to working AI teammate. Whether it lives in Teams, on your internal portal, or embedded in your support dashboard, your custom chatbot now becomes a living resource: a searchable, conversational layer on top of your company’s collective expertise.

        Your next frontier? Expanding the chatbot’s role — connecting it to workflows, integrating it with service tickets, or even giving it multi-agent collaboration powers to route complex client issues automatically.