Shared Document

Promptology Document

Video Walkthrough

Promptology Walkthrough

Inspiration

This tool was not designed for Coda’s AI at Work Challenge. It existed long before the challenge was announced, before Coda AI was even in alpha testing, and it has been delivering AI productivity for many months. It originally used OpenAI APIs through a custom Pack.

This community article is partly responsible for its existence, along with earlier influence from Tesla and SpaceX. This version has been streamlined for open submission to the challenge, but it is no less powerful than the one used by me and my team.

As Coda users, we are quick to focus on building “the thing”. The relevance of Promptology to the AI at Work Challenge parallels the need to scale AI productivity to build many things, not just solve a specific problem with AI. The Promptology Workbench is simply the “thing” that helps you build many other “things”. For this reason, it’s essential to think about the extended productivity this tool can produce. Promptology is to AI productivity what compound interest is to Bank of America.

I certainly want to win the challenge, but I’ve already won in a big way. The Codans have ensured that we are all winners.

What it does

Promptology is about one thing - mitigating the counter-productive challenges of AI prompt development.

How we built it

Purely Coda and Coda AI

Challenges we ran into

As early adopters of AI, we have grand visions of multiplying our work output. There’s no shortage of media outlets and multi-post Twitter threads that have convinced the masses that AI makes digital work a breeze. Reality check: it doesn’t.

ChatGPT and Coda AI users typically experience poor results because successful prompts are not as easy to create as you first imagine. How hard can it be? It’s just words. The reality is that it is both hard and complex, depending on the AI objective.

Two aspects of prompt development are working against us.

  1. Prompt Construction - most of us “wing” it when building prompts.
  2. Prompt Repeatability - most of us are inclined to build AI prompts from scratch every time.

Getting these two dimensions right for any Coda solution takes patience, new knowledge, and a little luck. I assert that ...

The vast productivity benefits of AI are initially offset and possibly entirely overshadowed by the corrosive effects of learning how to construct prompts that work to your benefit.

The very nature of prompt development may have you running in circles in the early days of your AI adventure. You’ve probably experienced this with ChatGPT or Bard. It’s debilitating and often frustrating, like playing a never-ending game of whack-a-mole.

Accomplishments that we're proud of

We use Promptology in all these use cases.

AI for Personal Productivity

One of the challenges with AI is creating prompts that work well and deliver benefits repeatedly and quickly. Having a single prompt warehouse where I can rapidly locate a prompt I know I created previously is a huge personal benefit. The automated prompt-type classifier in Promptology can identify prompts that are designed for personal use.

AI for Research & Analysis

At my company, we perform dozens of R&D assessments to understand new software, manufacturing, and transportation technologies. Promptology has streamlined this work, and while no AI is perfect, the consistency it achieves continues to produce compelling and helpful analyses that we can quickly expand on with inline Coda AI features. This prompt example demonstrates how we accelerate work and save about half an hour every time we restore this prompt, augment it slightly, and run it.

AI for Product Development

Often we need to create software code that may be new to the team. Promptology gives us a success pattern for managing a growing collection of software engineering templates that can be called upon without any setup or re-crafting of the prompts. The repeatable nature of Promptology invites developers to be productively lazy and benefit from a process where much of the tedious effort is eliminated.

AI for Marketing & Selling

We also use Promptology as the driver of our social media monitoring solution (also built in Coda). It was designed with prompts pulled from Promptology through Coda’s Cross-Doc Pack. Previously, we used the MeltWater platform ($4500/yr), which has now been replaced with Coda AI features fed by Promptology.

AI for Business Operations

Almost every aspect of doing business will eventually be touched by AI. This means prompting skills will need to expand greatly, and Promptology plus Coda AI is precisely the tool that can impact everyday work in a positive way. At Stream It, we use Promptology to build AI features for sales auditing, CRM inbound contacts, and outreach, as well as testing protocols for our highway analytics systems.

What we learned

LLMs speak before they think. The challenge is to get them to think before they speak. Prompt development systems and AI workflows make this possible.

What's next for Promptology

Scale. Finding new ways to apply this approach to support an entire enterprise.

Built With

  • ai
  • coda

Updates


Mega-Prompts

AI prompts, thus far, have been simple for the most part. A couple of sentences. Maybe a data grid is included. Sometimes a few rules, such as “don’t format the output with numbered lists.”

Something is happening that belies this general prompting pattern. Prompts are upsizing at an astounding rate. This new type of prompt contains a lot of detail, instructions, and, perhaps, domain-specific thought processes. Large prompts are not new, but they have been constrained by token costs. Using large prompts to embody elaborate domain expertise is not something you see often.

This article dives deep into the emerging realm of mega and hyper-prompts.

I believe hyper-prompts are tomorrow’s knowledge objects. They will be curated, shared, subclassed, injected with real-time data, and even licensed as proprietary executable “code”. Coda, as it turns out, is an ideal place to build and test hyper-prompts. Promptology provides the underlying framework for building reusable hyper-prompts.
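To make the idea concrete, here is a minimal sketch of a hyper-prompt treated as a reusable knowledge object with slots for injected data. The interface, field names, and template wording are hypothetical illustrations, not Promptology’s actual schema.

```typescript
// Hypothetical sketch only: a hyper-prompt modeled as a versioned,
// reusable object whose template accepts injected real-time data.
interface HyperPrompt {
  name: string;
  version: string;
  components: string[]; // e.g. role, domain framework, output rules
  render: (data: Record<string, string>) => string;
}

const techAssessment: HyperPrompt = {
  name: "R&D Technology Assessment",
  version: "1.0",
  components: ["Role", "Assessment framework", "Output rules"],
  // Data could be pulled from a Coda table at run time and injected here.
  render: (data) =>
    [
      "You are a senior technology analyst.",
      `Assess the following technology: ${data.technology}.`,
      `Consider these current market signals: ${data.signals}.`,
      "Return a structured assessment covering maturity, risks, and fit.",
    ].join("\n"),
};

console.log(
  techAssessment.render({
    technology: "LiDAR-based highway analytics",
    signals: "new vendors entering, falling sensor prices",
  })
);
```

Curating, subclassing, or licensing a hyper-prompt then amounts to managing objects like this one rather than loose blocks of text.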

Example Hyper-Prompt



Generating Coda Pack Code

On a hunch, today I decided to test the hypothesis that a well-constructed prompt could generate a functional Coda Pack that would build and execute without modification. Within minutes of providing a reasonable Pack example and three prompt adjustments, the code was generated, built, and executed correctly.

This is just one very simple test, but it demonstrates that with a suitable prompt, Coda AI can generate code in the flavor and style that the Coda Pack SDK requires.
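For reference, the kind of scaffolding the Pack SDK expects looks roughly like the snippet below. This is a generic hello-world-style Pack, not the Pack that was actually generated; the formula name and logic are placeholders.

```typescript
import * as coda from "@codahq/packs-sdk";

// A minimal Coda Pack with a single formula. The formula shown here is a
// placeholder example, not the Promptology-generated Pack described above.
export const pack = coda.newPack();

pack.addFormula({
  name: "Shout",
  description: "Returns the input text in upper case.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "text",
      description: "The text to transform.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([text], context) {
    return text.toUpperCase();
  },
});
```

A prompt that reliably reproduces this structure (imports, the pack declaration, parameter definitions, and an execute function) is most of what is needed for the SDK to build the result without modification.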

Promptology-Generated Coda Pack



ChatGPT popularized AI but [Google's] Gemini May Be a Much Bigger Milestone

It won't be long before Coda AI supports more than a single LLM (large language model). The evidence is clear - something even bigger than OpenAI's 2022 landmark launch is going to sneak up on us soon.

When that happens, we'll need to think about prompts in the context of different models. Some will work unchanged across different models. Many won't.

Even today, subtle prompt differences exist between OpenAI's GPT-3 and GPT-4. It's difficult enough creating high-performing prompts, let alone trying to remember the nuances of different models.

Promptology has been designed with an eye toward a future where Coda AI supports multiple models and uses its AI capabilities to provide automated prompt variants targeting specific models. This capability has not been exposed in the Promptology UI [yet], but you can see evidence of it in hidden fields (e.g., Prompt Guide).

You are free to use Prompt Guide for any purposes you might have. The premise of this field is to create a cascading review of the entire prompt as it is sequentially constructed from each prompt component. Each stage in this progression reviews all of the prompt guidance accumulated up to that point. It's like asking the question -

How would you build this next part of the prompt given all that has been provided for all components up to this component?
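A rough sketch of that cascade, in code form, might look like the following. The component names and review wording are assumptions for illustration, not the internals of the Prompt Guide field.

```typescript
// Hypothetical illustration of a cascading, component-by-component review.
interface PromptComponent {
  name: string; // e.g. "Role", "Context", "Task", "Constraints"
  text: string; // the guidance written for this component
}

function buildCascadingReviews(components: PromptComponent[]): string[] {
  const reviews: string[] = [];
  let accumulated = "";
  for (const component of components) {
    // Ask how the next component should be built, given everything so far.
    reviews.push(
      `Given the prompt guidance assembled so far:\n${accumulated || "(none yet)"}\n\n` +
        `How should the "${component.name}" component be written?\n` +
        `Draft guidance: ${component.text}`
    );
    accumulated += `\n[${component.name}] ${component.text}`;
  }
  return reviews;
}
```

Each string in the returned list would be handed to the model as a review pass before the next component is locked in.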

We must consider the fluidity of prompt dependencies as we inch closer to AI systems that are model agnostic.



Prompt Reusability - Keeping it Simple

One of the things my team demanded was extreme simplicity in leveraging other team members' prompts. Promptology achieves this by making it a single click to recall a prompt, which can then be modified and tested as a new prompt. This one requires only two small changes.

Important Tip: Note the use of color and bold formatting in the prompt. Coda makes this possible, allowing prompt creators to format the text so that other users can understand what might need to change when reusing a prompt.

Prompt Reuse



Search Queries and AI Prompting

In 1999, Google Search made it clear that the more information artifacts there were, the more precise humans would have to become to find anything on the Interwebs. Back then, we struggled to learn how to search. Some of us got good at it. Some, not so much, and many still struggle today. LetMeGoogleThatForYou.com is still a thing.

In contrast to modern search, artificial general intelligence is based on ALL the information artifacts plus ... ALL the ways all the information artifacts could ever possibly be used, remixed, joined, and analyzed. This is a massive difference and certainly a giant leap forward in information technology.

To say that prompt engineering is about discrete and articulate queries is almost laughable. Have we ever faced a more critical moment in history where we must choose words more wisely?

Over the years, have modern search systems become more forgiving of poorly crafted queries? Nope. AI queries will likely follow the same arc.

If anything, modern search systems have become more sensitive, producing less helpful outputs year over year. And this is why those who need to find stuff are turning to conversational UIs and UXs that can sustain context, perhaps nudging the productivity curve in their favor.

While search and AI productivity share similar challenges, prompt engineering throws a new monkey wrench into the works. The slightest change in a prompt can alter outcomes so much that they become almost unrecognizable. Unfortunately, this - and this alone - suggests we need to shape prompts the way we craft legal documents.

With great power comes the responsibility to advance future information systems that leverage this technology. We must tame prompts through well-understood change management processes.
