Trialled by Daniel Zuboff
Strategy at Loomery
One of the more striking graphs from OpenAI's GPT-4 technical report: the model's performance (by percentile) on a range of human exams. Blue bars are GPT-3.5, green bars are GPT-4. The leap in ability between the two large language models is clear.
I’ve been playing around with GPT-4 in ChatGPT, trying to extract the best possible performance from the tool for content creation. My goal with the following prompts was to generate a high-quality article worthy of Loomery’s blog.
The methodology I found most useful: first, ask the AI to generate a list of 10 core ideas around which articles might be written, then pick my favourite topic and have it generate a list of catchy titles. Next, I selected the title that seemed most interesting to me and asked GPT to produce a few potential outlines for an article with that heading, picking my favourite structure overall. Finally, I asked GPT to write the full article from that outline, giving it feedback to coax the maximum quality from the output.
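For anyone who wants to automate part of this workflow rather than working in the ChatGPT interface, the steps above can be sketched as a small pipeline where a human reviews each stage's output and feeds the favourite option into the next prompt. This is a rough sketch assuming the official OpenAI Python client (v1+) with an API key in the environment; the prompt templates and function names here are my own illustrations, not the exact prompts used in this article.

```python
# Illustrative idea -> title -> outline -> article pipeline.
# Assumes `pip install openai` and OPENAI_API_KEY set; templates are examples.
STEPS = [
    "Give me a list of 10 core ideas for blog articles about {topic}.",
    "For this idea, generate 10 catchy article titles:\n{choice}",
    "Write 3 potential outlines for an article titled:\n{choice}",
    "Write the full article based on this outline:\n{choice}",
]

def build_prompt(step: int, topic: str = "", choice: str = "") -> str:
    """Fill in the template for one stage of the pipeline."""
    return STEPS[step].format(topic=topic, choice=choice)

def run_step(client, prompt: str, model: str = "gpt-4") -> str:
    """Send one prompt to the model. In practice you review the output
    yourself and pass your favourite option to the next step as `choice`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In use, you would call `run_step` four times, manually choosing the best idea, title, and outline between calls; the human-in-the-loop selection is where the quality comes from.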
First, I wanted to compare the utility of the previous GPT-3.5 model with the latest one, so I gave both the same initial prompt and selected my favourite of the three responses I generated with each model.
Prompt:
Give me a list of 10 powerful use-cases for GPT in a startup that focuses on lean and agile digital product development for websites and apps, and business strategy work. The list of use-cases should be creative, and the answers you give could be used as starting points for entertaining blog posts drawing thousands of potential customers to the startups website and social media.
You can see a few things I did to try to get the best outputs I could: I gave the model plenty of context to work with, specified exactly what I required from it, and was deliberately optimistic in describing the quality of ideas it should produce. Here are the outputs: