From Pen to Prompt: Revolutionizing Everyday Work with Copilots
Part of the `AI Tips for Success at Work` Series
Table of contents
- Behind "The Whoa" Factor
- From Pen to Prompt - An Introduction to Prompt Engineering and Advanced Techniques
- Conclusion: Master the art of prompting
The days of being amazed by AI tools like ChatGPT are behind us: Copilots are here, they work, and they're transforming how we operate, especially within the Microsoft 365 ecosystem. Now is the time to move beyond the basics, unlock their full potential, and boost productivity.
In this post, we'll explain how the technology works, clearly and simply, so you, as a user, can apply it effectively with easy-to-implement advanced prompt engineering techniques.
Behind "The Whoa" Factor
I'm guessing you're already interacting with AI tools like M365 Copilot at work, or if not, my hope is that by the end of this article, you'll be inspired to start. Consider these examples of prompts that showcase the power of these tools:
These are no longer time-consuming tasks. In my industry, software development and AI, the impact has been particularly transformative. Writing code is now faster, cheaper, and significantly more efficient. Technical leads, principal engineers, and software architects now work seamlessly with AI tools like ChatGPT or GitHub Copilot, achieving productivity levels once thought impossible, even matching the output of full-time development teams.
The real difference lies in knowing how to communicate effectively with these tools. The better you understand their limitations, strengths, and capabilities, the more you can boost your productivityâwhether you're an individual contributor or a manager. The possibilities are endless across industries and tasks, but only if you know how to use these tools well.
That said, this new paradigm of human-computer interaction is unlike anything we've experienced before. For many, these advancements can feel overwhelming, raising critical questions like:
How do these tools "understand" us well enough to deliver such nuanced and insightful responses?
For some, the inability to fully grasp this technology, especially concerns around security and the handling of "my own data," has slowed adoption or discouraged them entirely.
That's why the goal of this post is to demystify AI tools, with a focus on Microsoft Copilot (Business Chat), and to make them accessible to everyone by simplifying the technical concepts and streamlining their interaction.
We'll explore three essential areas:
A high-level understanding of the science behind Large Language Models (LLMs).
The engineering principles that allow these tools to interpret and respond to complex contexts, all while ensuring robust security measures to protect your data.
The real-world impact of prompt engineering in boosting workplace productivity and innovation through easy-to-leverage techniques you can adopt today.
What Powers the "Whoa" Moment: Large Language Models (LLMs)
So, how do LLMs learn and work? Let's break it down at a high level, in simple terms.
First, we have something called training, which is basically how the LLM learns to do its job. The process happens in two major phases: pre-training and fine-tuning.
Phase 1: Pre-training, or "learning from reading."
In this part, the AI is reading massive amounts of text, like books, websites, and articles. No one is telling it what to do; it's just picking up patterns on its own. It learns how sentences are built and how words relate to each other in different situations. Here's the technical bit:
The neural network, specifically one based on the Transformer architecture, is tasked with predicting the next word in a sequence. For example, if the input is "The sun rises in the...", the model predicts the word "morning." This process, called next-token prediction, is repeated billions of times on vast amounts of text, improving the model's ability to understand and generate human-like text. The result of this phase is a foundational model: a general-purpose AI that has learned the basics of language.
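The next-token objective can be sketched with a toy example. To be clear, this is only an illustration of the idea: a real LLM learns with a Transformer neural network, not a lookup of word counts, but the training signal, predict the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of "learning from reading": count which word tends to
# follow each word in a tiny corpus. Real models learn far richer patterns,
# but the objective is identical: predict the next token.
corpus = "the sun rises in the morning . the sun sets in the evening .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed continuation of `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("rises"))  # "in" is the only observed continuation
print(predict_next("the"))    # "sun" wins: it followed "the" most often
```

Scale this idea up to trillions of tokens and billions of learned parameters, and you get the foundational model described above.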
Phase 2: Fine-Tuning, or Adding Human Expertise
Step two is where humans come in to fine-tune the model. Once pre-training is complete, the model undergoes fine-tuning. This phase involves human experts (AI researchers and developers) who guide the model to improve its performance on specific tasks or domains. This fine-tuning ensures the model not only generates text but does so in ways that are practical and aligned with real-world needs. The result is a Large Language Model, fine-tuned to generate text that is practical, human-context-aware, and ready for integration into AI applications.
So, How Does the LLM Work in Practice?
When you interact with an LLM (inference), say, by asking it to "write an email to schedule a meeting," here's what happens:
Input (Prompt): You provide a prompt, like "Write an email to schedule a meeting."
Prediction: The model takes the input and predicts the next word iteratively. For example:
- "Dear" → "team," → "I" → "hope" → "this" → "email" → "finds" → "you" → "well."
Response Generation: The process continues until the model completes its response.
Behind the scenes, the model is using the patterns it learned during pre-training, combined with the refinements added during fine-tuning, to generate coherent, relevant text.
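That iterative loop can be mimicked in a few lines. The `toy_model` dictionary below is a made-up stand-in; a real LLM predicts from learned probabilities over a huge vocabulary, but the generate-one-word, append, repeat loop is the same:

```python
# Minimal sketch of inference: repeatedly predict the next word and append
# it to the running text. `toy_model` is a hypothetical stand-in for an LLM.
toy_model = {
    "meeting:": "Dear", "Dear": "team,", "team,": "I", "I": "hope",
    "hope": "this", "this": "email", "email": "finds", "finds": "you",
    "you": "well.",
}

def generate(prompt, max_tokens=20):
    words = prompt.split()
    for _ in range(max_tokens):
        next_word = toy_model.get(words[-1])
        if next_word is None:  # nothing more to predict: stop generating
            break
        words.append(next_word)
    return " ".join(words)

print(generate("Write an email to schedule a meeting:"))
# → Write an email to schedule a meeting: Dear team, I hope this email finds you well.
```

Real systems stop on a special end-of-sequence token rather than a missing dictionary entry, but the shape of the loop is the same.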
Here's how LLMs move from training to real-world use in applications like Microsoft Copilot:
Once trained, these LLMs are deployed via platforms like Azure OpenAI, enabling developers to integrate AI seamlessly into applications through APIs. This Model as a Service (MaaS) approach brings the power of LLMs to tools like Microsoft Copilot, enabling them to transform workflows.
Azure OpenAI provides LLMs as a MaaS solution, allowing developers to send prompts and receive responses in real time. These APIs make it easy to integrate advanced AI capabilities into enterprise applications without managing the underlying infrastructure.
And that's it!
This is how LLMs are trained, fine-tuned, and deployed to power real-world applications. By leveraging Azure OpenAI's Model as a Service (MaaS) approach, these models are hosted, scaled, and made accessible via APIs, enabling seamless integration into enterprise tools like Microsoft Copilot.
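As a rough sketch, here is what sending a prompt to an Azure OpenAI deployment looks like from a developer's seat. The endpoint, key, and deployment name are placeholders, not real values, and the live network call is gated behind a flag so the sketch runs offline:

```python
# Sketch of calling an LLM served as Model-as-a-Service on Azure OpenAI.
# "my-gpt-deployment" and the endpoint/key are hypothetical placeholders.
messages = [
    {"role": "system", "content": "You are a helpful workplace assistant."},
    {"role": "user", "content": "Write an email to schedule a meeting."},
]

def send_prompt(messages, live=False):
    if not live:  # offline: return the request payload we would send
        return {"model": "my-gpt-deployment", "messages": messages}
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",
        api_key="<your-key>",
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="my-gpt-deployment", messages=messages
    )
    return response.choices[0].message.content

request = send_prompt(messages)
print(request["model"])
```

The point is the division of labor: you own the prompt and the integration; Azure OpenAI owns hosting, scaling, and the model itself.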
The Engineering Behind AI Tools Like Microsoft Copilot: Delivering Secure and Contextual Answers to User Inputs
Now that we've explored how LLMs work and are capable of generating coherent, human-like text, the real question becomes: how do these tools connect contextually to your work while keeping your data secure? Let's start with a task you might encounter at work:
Imagine you're a project manager at a fast-paced consultancy. You've just wrapped up a series of meetings with a high-value client, and now you need to consolidate all communications (emails, Teams chats, and meeting notes) into a clear summary to share with your leadership team.
You open Microsoft Copilot (Business Chat) and type a simple prompt:
"Summarize all interactions with Client X over the past two weeks, highlighting key decisions, follow-ups, and unresolved issues."
Within seconds, Copilot springs into action:
It accesses only the data you have permission to view, such as project updates in SharePoint, meeting notes from Teams, and emails from Outlook.
Using Microsoft Graph, it retrieves relevant files, chats, and activities, grounding your prompt with contextually precise data.
The Azure OpenAI-powered LLM processes the information, generating a detailed summary tailored to your request.
Hereâs what you get:
"Over the past two weeks, Client X has finalized their product roadmap, requested additional prototypes, and expressed concerns about delivery timelines. Key follow-ups include clarifying the budget for phase 2 and scheduling..."
What makes this seamless experience possible? A robust interplay of secure data access and intelligent orchestration.
Microsoft 365 Copilot essentially acts as a Retrieval-Augmented Generation (RAG) system tailored for enterprise needs. It retrieves precise, relevant data from trusted sources within your tenant, enriches it with cutting-edge AI, and delivers actionable insightsâall while ensuring privacy and security.
Grounding for Contextual Relevance:
Before the LLM even generates a response, Copilot preprocesses your request, ensuring that only data relevant to your prompt is used. This step guarantees actionable and accurate outputs.
Secure Data Handling:
All user prompts and responses are processed within the Microsoft 365 service boundary, adhering to enterprise-grade compliance standards. Your data stays encrypted in transit, and Copilot respects role-based access controls, meaning it only accesses what you're authorized to see.
LLM-Powered Insights:
Once the grounded data reaches the LLM hosted on Azure OpenAI, it leverages its advanced training to generate coherent, human-like responses, whether it's summarizing interactions, drafting proposals, or creating action plans.
Integration into Everyday Tools (Copilots):
The output seamlessly integrates back into your Microsoft 365 apps (Word, Teams, or Outlook), ready for you to refine and share with your team.
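The flow just described, permission-aware retrieval, grounding, then generation, can be sketched as a minimal RAG loop. Everything below (the documents, the access lists, the `llm` callable) is a made-up stand-in for Microsoft Graph and Azure OpenAI, which this sketch does not touch; it only illustrates the order of operations:

```python
# Minimal RAG sketch: (1) retrieve only documents the user may see,
# (2) ground the prompt with that context, (3) hand it to an LLM.
documents = [
    {"source": "Outlook", "acl": {"alice", "bob"}, "text": "Client X finalized the roadmap."},
    {"source": "Teams", "acl": {"alice"}, "text": "Client X requested more prototypes."},
    {"source": "SharePoint", "acl": {"bob"}, "text": "Budget for phase 2 unclear."},
]

def retrieve(user, query_terms):
    # Enforce role-based access first, then filter by relevance.
    permitted = [d for d in documents if user in d["acl"]]
    return [d for d in permitted if any(t in d["text"] for t in query_terms)]

def answer(user, prompt, llm):
    context = retrieve(user, ["Client X"])
    grounded = ("Context:\n" + "\n".join(d["text"] for d in context)
                + f"\n\nTask: {prompt}")
    return llm(grounded)

# Stub LLM: reports how many context documents mention Client X
# (subtracting the one mention that comes from the task line itself).
summary = answer("alice", "Summarize interactions with Client X.",
                 lambda p: f"Grounded on {p.count('Client X') - 1} documents.")
print(summary)  # → Grounded on 2 documents.
```

Note that access control happens before retrieval ever reaches the model, which mirrors the "only the data you have permission to view" guarantee above.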
Now we see how LLMs provide clear and relevant responses, thanks to advanced AI systems like Microsoft 365 Copilot. With this tool available, the big question is: how do we make the most of it? The answer lies in learning how to prompt effectively; this is the key to getting the best results and making a real impact, which we will address in the next section.
From Pen to Prompt - An Introduction to Prompt Engineering and Advanced Techniques
Learning the Basics of Prompt Engineering
What is Prompt Engineering?
Prompt engineering is more than just writing instructions for an AI; it's a foundational skill that determines how effectively LLMs deliver value. In today's AI-driven era, it has become a new form of literacy. Much like crafting a compelling narrative or coding with precision, designing well-thought-out prompts is crucial for guiding AI systems to produce relevant, accurate, and actionable outputs. Think of it as the bridge between human intent and machine understanding. A poorly crafted prompt can lead to vague, irrelevant, or incomplete responses, resulting in inefficiency and frustration. Conversely, a well-structured prompt unlocks the full potential of the AI, enabling it to provide tailored insights, meaningful solutions, and results aligned with your specific goals.
Best Practices
As with any skill, practicing is essential for learning. Let's look at an example to see how a good prompt differs from a bad one:
Why does the bad prompt fail?
Lacks Specificity: The prompt doesn't specify the objectives, methodology, or key components required in the plan.
Ambiguous Structure: Without a defined structure, the AI may produce an unorganized or incomplete response.
How did we improve the prompt for a clinical trial plan?
| Best Practice | How-to |
| --- | --- |
| Be Specific | Clearly define the task and required sections. The prompt explicitly states the purpose: "Evaluate the efficacy and safety of Drug X in treating Disease Y." It also lists the specific sections to include, reducing ambiguity. |
| Be Descriptive | Provide concise guidance for each section. Each section includes clear instructions on what details to provide. This helps guide the AI to generate comprehensive and relevant content. |
| Double Down on Clarity | Reinforce instructions at both the beginning and end. The prompt starts with detailed requirements and ends by emphasizing how to handle missing information, ensuring the AI understands the task fully. |
| Order Matters | Use numbered sections for logical flow. Numbered sections organize the response, making it easy to read and ensuring the AI generates the response in a coherent, structured format. |
| Give the Model an "Out" | Include fallback instructions for missing information to reduce the likelihood of hallucination. By instructing the model to respond with "Information not available" if any section cannot be detailed, we prevent the AI from making assumptions or fabricating data. |
| Space Efficiency | Use concise language to optimize token usage. The prompt is straightforward, with clear instructions. Bullet points and numbered lists help convey complex instructions without redundancy, ensuring the model stays within token limits. |
Try substituting Drug X with short-acting insulin and Disease Y with type 1 diabetes, and see the difference in the outputs.
Diving Into Advanced Prompt Engineering Techniques
Now that we've covered the basics of creating effective prompts, let's move on to advanced techniques. While the industry calls it prompt engineering, I like to call it prompt hyper-optimization, a throwback to the days of fine-tuning ML models like XGBoost (I am sure you smiled; if you know, you know). The idea is simple: refine how prompts are structured to get the best results. These techniques may feel a bit manual at first, but they're research-backed and can make a huge difference in performance. This knowledge helps me build and optimize AI systems and solutions that are transforming industries.
I'll walk you through practical methods to improve your prompting skills and sharpen your mental models. We'll start with a simple task, using Microsoft Copilot (Business Chat) to organize tasks, and then explore advanced strategies to take your prompts (and results) to the next level.
# Baseline Prompt
**Hey Copilot,**
Help me organize my day. Look through my emails, calendar, and Teams chats, and create three lists of tasks:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
For each task, include:
- **Task Name**
- **Deadline (if any)**
**Note: Handle Missing Details**
- If any detail is missing or cannot be found, use **"Information not available."**
- **Do not** guess or fabricate data.
If you missed how M365 Copilot works or are curious about how Microsoft Copilot accesses your data before you start, check out the section above: The Engineering Behind AI Tools Like Microsoft Copilot...
1. Role Prompting
Role prompting is powerful in its simplicity. By assigning a specific role to the model (e.g., "You are a legal expert" or "You are a medical professional"), you guide the model's output to be more contextually accurate and specialized. This strategy is particularly effective for domain-specific tasks.
For example, if you add the following line to your prompt:
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by looking through my emails, calendar, and Teams chats. Create three lists of tasks:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
For each task, include:
- **Task Name**
- **Deadline (if any)**
**Note: Handle Missing Details**
- If any detail is missing or cannot be found, use **"Information not available."**
- **Do not** guess or fabricate data.
you help orient the LLM toward providing more specialized, relevant solutions for managing your schedule.
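If you ever move from Copilot's chat box to calling a model through an API, the role line typically becomes the system message, so it conditions every turn of the conversation. A small sketch (the message shape shown is the common chat-completions format; treat the helper as illustrative):

```python
# Role prompting in chat-style APIs: put the role in the system message
# so it shapes every subsequent turn. `with_role` is a hypothetical helper.
def with_role(role_description, user_prompt):
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_prompt},
    ]

messages = with_role(
    "You are an advanced personal productivity consultant "
    "with expertise in time management.",
    "Help me organize my day into Must-Do, Should-Do, and Could-Do lists.",
)
print(messages[0]["role"])  # → system
```

The same two-line change is how you would apply any of the role examples above programmatically.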
2. Chain-of-Thought (CoT) Prompting
Chain-of-Thought prompting is powerful in its systematic approach. By asking the model to reason step-by-step through a problem, you ensure that its output is both logical and thorough. This technique is particularly effective for solving complex tasks, such as math problems or logical reasoning challenges, where clear reasoning leads to better results.
For example, if you add the following task and step-by-step reasoning
**Hey Copilot,**
Help me organize my day by analyzing my emails, calendar, and Teams chats. Think through each step carefully and categorize tasks as follows:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
For each task, include:
- **Task Name**
- **Deadline (if any)**
- **Recommended Time Slot(s)**
- **Documents/Files** (if relevant or available)
---
### **Step-by-Step Instructions**
1. **Gather & Categorize Internally**
- Review my emails, calendar, and Teams chats to identify all tasks.
- For each task, think through its urgency, importance, and context step by step.
- Decide which category each task belongs to (**Must-Do**, **Should-Do**, or **Could-Do**).
- Keep this reasoning to yourself; **do not** display your thought process in the final output.
2. **Incorporate Best Times & Focus Tips**
- Analyze my schedule and figure out the best time slots for each task.
- Consider strategies like blocking quiet time, grouping similar tasks, or scheduling reminders.
- Include these suggestions in the **Recommended Time Slot(s)** column of the final tables.
3. **Handle Missing Details**
- If any information is missing, label it **"Information not available."**
- Avoid guessing or fabricating details.
you guide the AI to process the problem systematically and produce well-structured, accurate results, making it ideal for detailed problem-solving scenarios.
3. Structured Chain-of-Thought (SCoT) Prompting
Structured Chain-of-Thought prompting builds on standard CoT by integrating defined structuresâlike loops, sequences, and conditionalsâinto the modelâs reasoning. This approach is particularly useful for technical tasks (e.g., code generation or structured document creation), where clarity and organization are crucial.
For instance, you could enhance the prompt by specifying a clear, stable output structure (table format):
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by analyzing my emails, calendar, and Teams chats. Use a structured approach to ensure all tasks are carefully categorized and outputs are presented in a clear, organized format. Follow these steps:
---
### **Step-by-Step Instructions**
1. **Review Inputs Sequentially**
- Start by analyzing my emails, followed by calendar events, and finally Teams chats.
- For each input source, identify tasks systematically and note their context.
2. **Categorize Tasks with Conditional Logic**
- Assign each task to one of the following categories based on urgency and importance:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
- Use structured conditionals to make decisions (e.g., if a task has a deadline today, assign it to **Must-Do**).
- Ensure that each decision is logical and consistent.
3. **Collect Required Data**
- For each task, gather the following information:
- **Task Name**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)** (based on my schedule)
- **Documents/Files** (any relevant resources or links)
4. **Final Tables Only**
- Organize the output into **three separate tables**, one for each category:
- **Must-Do**
- **Should-Do**
- **Could-Do**
- Each table should have the following columns:
1. **Task**
2. **Who Assigned It**
3. **Deadline (if any)**
4. **Recommended Time Slot(s)**
5. **Documents/Files** (any relevant resources)
- Ensure the tables are complete and well-structured.
- Do **not** include any reasoning, intermediate steps, or commentary outside of the tables.
5. **Handle Missing Data**
- If any detail is missing or cannot be found, label it as **"Information not available."**
- Do **not** guess or fabricate information.
---
### **Final Output Format**
- Provide only the **three structured tables** in your final response:
1. **Must-Do**
2. **Should-Do**
3. **Could-Do**
- Ensure the output is clear, precise, and formatted consistently.
- No additional text, reasoning, or comments should appear outside of the tables.
Unlike standard CoT, which focuses on reasoning step-by-step, SCoT integrates defined structures like sequences and conditionals to handle tasks requiring organization and systematic decision-making. The structured tables ensure all data is presented in a clear, actionable format, minimizing confusion and maximizing usability.
4. Tree-of-Thought (ToT) Prompting
Tree-of-Thought (ToT) expands on CoT by allowing the AI to explore multiple reasoning paths at once, like branches of a tree. This is particularly useful for complex decision-making processes where there may be several possible solutions. By letting the model evaluate multiple "branches" before selecting the best outcome, ToT is effective in problem-solving tasks, such as AI-driven strategy games or financial forecasting.
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by analyzing my emails, calendar, and Teams chats. Use a Tree-of-Thought approach to navigate and evaluate different reasoning paths for categorizing and prioritizing tasks. Follow these steps:
---
### **Step-by-Step Instructions**
1. **Identify Multiple Reasoning Paths**
- For each task, explore multiple sources of information (emails, calendar events, Teams chats) to determine its urgency, importance, and context.
- Evaluate different branches of reasoning to decide whether the task belongs to:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
- Example branches to consider:
- **Branch 1**: Does the task have a strict deadline?
- **Branch 2**: Is the task directly related to a high-priority goal or objective?
- **Branch 3**: Can the task be postponed without significant consequences?
2. **Weigh and Compare Options**
- Compare outcomes from the different branches. For example:
- If a task is urgent but lacks a clear deadline, should it still be a **Must-Do**, or can it be a **Should-Do**?
- If multiple tasks have the same level of urgency, prioritize based on additional criteria like impact or dependencies.
3. **Collect All Relevant Data**
- For each task, gather the following information:
- **Task Name**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)** (based on my schedule)
- **Documents/Files** (any supporting resources or links)
4. **Use Tables to Present Findings**
- Organize the final output into **three separate tables**, one for each category:
- **Must-Do**
- **Should-Do**
- **Could-Do**
- Each table should have the following columns:
1. **Task**
2. **Who Assigned It**
3. **Deadline (if any)**
4. **Recommended Time Slot(s)**
5. **Documents/Files** (any relevant resources or links)
- Ensure the tables are clean, well-structured, and complete.
5. **Handle Missing Information Thoughtfully**
- If a branch of reasoning cannot provide all required details (e.g., no deadline is found), mark it as **"Information not available."**
- Do **not** guess or fabricate data. Move to another reasoning branch if applicable.
---
### **Final Output Format**
- **Final Output Tables**: Provide only the three structured tables as follows:
1. **Must-Do**
2. **Should-Do**
3. **Could-Do**
- Ensure all collected data is categorized and presented in the tables with the following columns:
- **Task**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)**
- **Documents/Files**
- Do **not** include additional reasoning, intermediate steps, or commentary outside the tables.
5. Graph-of-Thought (GoT) Prompting
While ToT still explores branches within a tree structure, Graph-of-Thought (GoT) takes things further by allowing non-linear reasoning pathways. This method is particularly useful for tasks that mimic human thinking in a more dynamic and less structured way, such as complex research or brainstorming sessions. It is not an easy technique to master, but I use it to bounce ideas off Copilot and surface connections and conclusions I would not have reached on my own.
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by analyzing my emails, calendar, and Teams chats. Use a Graph-of-Thought approach to explore non-linear connections and dynamically prioritize tasks. Follow these steps:
---
### **Step-by-Step Instructions**
1. **Map Interconnections Dynamically**
- Begin by mapping out interrelated tasks across emails, calendar, and Teams chats.
- Identify dependencies, overlapping priorities, and recurring themes. For example:
- Task A in an email may depend on a meeting in the calendar.
- Task B in Teams may align with a deadline-driven task in an email.
- Dynamically form a "graph" of tasks and their relationships.
2. **Navigate Non-Linear Pathways**
- Use non-linear exploration to analyze tasks in context. For instance:
- Explore how a task's priority changes based on related deadlines, assigned stakeholders, or project phases.
- Revisit earlier nodes (tasks) in the graph as new information emerges.
3. **Categorize Tasks Based on Relationships**
- Assign tasks to one of the following categories:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
- Leverage the graph to evaluate a taskâs overall importance by considering dependencies and connections.
- Example: A task that unlocks multiple related tasks should be a higher priority.
4. **Collect Relevant Data for Each Task**
- For each categorized task, gather the following information:
- **Task Name**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)** (based on the dynamic graph of priorities)
- **Documents/Files** (relevant resources or links)
5. **Use Tables to Present Findings**
- Organize the final output into **three structured tables**, one for each category:
- **Must-Do**
- **Should-Do**
- **Could-Do**
- Each table should have the following columns:
1. **Task**
2. **Who Assigned It**
3. **Deadline (if any)**
4. **Recommended Time Slot(s)**
5. **Documents/Files** (any relevant resources or links)
- Ensure the tables are well-structured and complete.
6. **Handle Missing Data Thoughtfully**
- If a task or node in the graph lacks certain details, mark it as **"Information not available."**
- Avoid guessing or fabricating data but use the graph to revisit connected nodes for additional insights.
---
### **Final Output Format**
- **Final Output Tables**: Provide only the three structured tables as follows:
1. **Must-Do**
2. **Should-Do**
3. **Could-Do**
- Ensure the output is clean, precise, and dynamically informed by the graph of interconnections.
- Do **not** include intermediate reasoning or graph structures; just the categorized tables.
6. ReAct Prompting
ReAct goes beyond reasoning by enabling models to generate both reasoning traces and actions simultaneously. This approach is particularly suited for interactive tasks where reasoning and decision-making are closely linked, such as customer support automation or conversational AI systems managing dynamic, multi-turn dialogues. It works best when the model is connected to a system it can act on, but you can mimic the pattern with explicit reasoning traces.
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by analyzing my emails, calendar, and Teams chats. Use a ReAct approach to dynamically reason through and perform the following steps:
---
### **Step-by-Step Instructions**
1. **Reasoning and Acting in Parallel**
- For each task identified, think through the context and priority while simultaneously determining the required actions.
- Example:
- **Reasoning Trace**: "This email contains a request for a meeting tomorrow, so it's urgent."
- **Action**: "Mark it as Must-Do and suggest a time slot based on the calendar."
2. **Dynamic Decision-Making**
- Adjust priorities and actions as you navigate through emails, calendar, and Teams chats.
- For multi-turn interactions (e.g., tasks spanning multiple emails or chats), update reasoning and actions in real-time.
- Example: If a Teams message references an email with additional context, update the reasoning trace and action accordingly.
3. **Categorize Tasks Dynamically**
- Assign tasks to one of the following categories based on the reasoning and actions:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
- Reevaluate categories dynamically if new information arises during analysis.
4. **Collect Relevant Data for Each Task**
- For each categorized task, gather the following information:
- **Task Name**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)** (based on real-time reasoning and actions)
- **Documents/Files** (relevant resources or links)
5. **Use Tables to Present Findings**
- Organize the final output into **three structured tables**, one for each category:
- **Must-Do**
- **Should-Do**
- **Could-Do**
- Each table should have the following columns:
1. **Task**
2. **Who Assigned It**
3. **Deadline (if any)**
4. **Recommended Time Slot(s)**
5. **Documents/Files** (any relevant resources or links)
6. **Handle Missing Data in Real-Time**
- If missing details are identified during reasoning, attempt to gather more context by revisiting related messages or tasks.
- If no additional information can be found, label the detail as **"Information not available."**
---
### **Final Output Format**
- **Final Output Tables**: Provide only the three structured tables as follows:
1. **Must-Do**
2. **Should-Do**
3. **Could-Do**
- Ensure the tables are dynamically informed by real-time reasoning and actions.
- Do **not** include intermediate reasoning traces in the output; only the final categorized tasks and their details.
7. Chain-of-Verification (CoVe) Prompting
To reduce hallucinations, Chain-of-Verification (CoVe) has the model generate an initial response, then verify it through targeted verification questions. By employing this verification loop, CoVe improves factual accuracy and reduces the chance of misleading outputs, especially in tasks like fact-checking or detailed QA systems.
**Hey Copilot,**
You are an advanced personal productivity consultant with expertise in time management. Help me organize my day by analyzing my emails, calendar, and Teams chats. Use a Chain-of-Verification approach to ensure accuracy and reliability in the categorization and details of tasks. Follow these steps:
---
### **Step-by-Step Instructions**
1. **Generate Initial Response**
- Review my emails, calendar, and Teams chats to identify tasks and assign them to one of the following categories:
1. **Must-Do (Urgent/Important)**
2. **Should-Do (Important but not Urgent)**
3. **Could-Do (Flexible)**
- For each task, collect:
- **Task Name**
- **Who Assigned It**
- **Deadline (if any)**
- **Recommended Time Slot(s)** (based on my schedule)
- **Documents/Files** (relevant resources or links)
2. **Verify the Initial Response**
- For each task, create a set of targeted verification questions to confirm accuracy. For example:
- Is the task description consistent across all related messages or entries?
- Does the assigned deadline match other contextual details (e.g., calendar or email)?
- Are the documents/files directly relevant to completing the task?
- Answer these questions to validate the task details.
3. **Refine the Response Based on Verification**
- If any discrepancies or inaccuracies are found during verification, update the task details accordingly.
- Example: If a deadline in the email differs from the calendar, update the task to reflect the correct deadline.
4. **Use Tables to Present Verified Findings**
- Organize the final output into **three structured tables**, one for each category:
- **Must-Do**
- **Should-Do**
- **Could-Do**
- Each table should have the following columns:
1. **Task**
2. **Who Assigned It**
3. **Deadline (if any)**
4. **Recommended Time Slot(s)**
5. **Documents/Files** (any relevant resources or links)
5. **Handle Unverifiable Information**
- If any details cannot be verified, clearly label them as **"Information not verifiable."**
- Avoid guessing or fabricating information.
---
### **Final Output Format**
- **Final Output Tables**: Provide only the three structured tables as follows:
1. **Must-Do**
2. **Should-Do**
3. **Could-Do**
- Ensure all task details are verified and accurate.
- Do **not** include the verification questions or intermediate steps; only present the final, verified task tables.
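For readers who build AI solutions, the CoVe loop is also easy to express in code. The sketch below uses a stub in place of a real model call; the structure (draft, verification questions, checks, revision) is the point, not the stub:

```python
# Sketch of the Chain-of-Verification loop. `llm` is any callable that
# takes a prompt string and returns text; here the caller passes a stub.
def chain_of_verification(task, llm):
    draft = llm(f"Answer the task:\n{task}")
    questions = llm(f"List verification questions for this answer:\n{draft}")
    checks = llm(f"Answer each question against the source data:\n{questions}")
    return llm(
        "Revise the draft so it is consistent with the checks.\n"
        f"Draft:\n{draft}\nChecks:\n{checks}\n"
        "Label anything unverifiable as 'Information not verifiable.'"
    )

# Stub model that records the first line of each prompt so we can see
# the four stages run in order: draft, questions, checks, revision.
trace = []
def stub_llm(prompt):
    trace.append(prompt.splitlines()[0])
    return "ok"

result = chain_of_verification("Summarize Client X tasks.", stub_llm)
print(len(trace))  # → 4
```

Each stage is a separate model call, which is why CoVe costs more tokens than a single prompt but pays it back in factual reliability.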
There are many other techniques worth exploring. Personally, I often use Program-of-Thought (PoT) Prompting and Chain-of-Code (CoC) Prompting, but diving into those would be beyond the scope of this article.
If you're curious and want to learn more, I highly recommend checking out the arXiv paper 2402.07927v1. It's a fantastic resource for deepening your understanding of these advanced methods!
Conclusion: Master the art of prompting
After writing this article, one big takeaway stands out: the value of prompting isn't about getting it perfect, but about refining the process. It's all about learning, trying things out, and using LLMs to broaden our views. With every well-thought-out prompt, we're not just looking for answers; we're creating a way to understand things better.
Prompting is more about how it makes us think, question, and evolve, rather than just the tech itself. It's a reminder that the best results come from mixing structure with exploration, turning AI into a real partner in thinking.
Honestly, my productivity has skyrocketed: 10x in coding, 5x in consuming information, and 2x in organizing and managing meetings at work. The best part? I'm still learning every day, discovering new tips, refining prompts, and exploring new ways to optimize.
It's such an exciting time! Don't miss out: start developing these essential skills now and unlock the potential of AI tools at work!
Did you find it interesting? Subscribe to receive automatic alerts when I publish new articles and explore different series.
More quick how-tos in this series here: Azure AI Practitioner: Tips and Hacks
Explore my insights and key learnings on implementing Generative AI software systems in the world's largest enterprises: GenAI in Production
Join me to explore and analyze advancements in our industry shaping the future, from my personal corner and expertise in enterprise AI engineering: AI That Matters: My Take on New Developments
And... let's connect! We are one message away from learning from each other!
LinkedIn: Let's get linked!
GitHub: See what I am building.