Investing in ChatGPT power skills ensures more consistent, effective and higher quality LLM outputs. These skills involve moving beyond simple prompts to iterative interactions with your chosen LLM.
The tips and tricks we share below are not restricted to ChatGPT. They will work just as well with Gemini, Claude, Grok and so on.
We’ll focus on four category areas:
- Principles: general principles to bear in mind when working with LLMs
- Context engineering: the importance of giving LLMs context and how to do this effectively
- Better inputs, better outputs: Using voice, images, and editing canvases
- Repeat tasks: using tools like Custom GPTs and projects to save time on routine actions
Let’s get into it now.
This topic is also available as a workshop.
General principles
LLMs work best when you keep some core characteristics in mind, and work through tasks sequentially, optimising each step as you go. These are general principles you should apply to every task to get the best outcome.
Below we’re going to cover:
- Knowing what LLMs don’t know: internalizing that they quickly default to the mean without specific context
- Getting options: don’t ask for answers, ask for options
- Making a plan: force it to work step by step to understand assumptions and viability
- Working through steps: iterating steps to get to best outcomes
- Playing ping pong: becoming the interface between specialist tools that execute work and the LLM
Knowing what they don’t know
The first thing to understand is that LLMs ‘know everything, yet know nothing’.
Putting it another way, LLMs contain vast amounts of digital knowledge. But they do not know what you want unless you tell them.
This leads to some interesting dynamics:
- Answer mentality leads to average results: if you ask for an answer, you will receive a generic one, since the model lacks context and defaults to the average.
- Dialogue mentality improves your thinking and their outputs: if you encourage the model to act as a thought partner, its encyclopedic knowledge helps stress test your ask, and the dialogue gives it critical context.
As a result, using ChatGPT as a sparring partner helps augment your thinking, improve your asks and outputs, and leads to dramatically better results than simple ask / generic output dynamics.
We will be focusing on this interplay throughout the rest of this article. It works best when you break your ask down into sequential steps, and optimize each step as you go, with your LLM as sparring partner. Let’s get into that now.
Always ask for options
The best way to force your LLM to deviate from generic responses is to ask for options. Avoid closed, single-option questions and any phrasing that would draw a ‘this / that’ answer from a human.
Example:
❌ ‘Tell me how to do X’
✅ ‘Give me some options which would help with this problem’
You can additionally seed the LLM with some ideas if you have them, but be careful to signal that these are questions and context about what you’re thinking, not instructions. This isn’t hard: you just need to add question marks and be clear about your ask.
Hustle Badger case study: Migrating our website
Context:
Hustle Badger’s website crashed whenever our weekly email created traffic spikes, so we needed to migrate to a new server provider to cope. But fixing one issue broke something else.
Our certificate functionality, which relied on PDFs, was blocked by our new server provider. People like getting their certificates when they finish courses! So we asked ChatGPT for help.
Ask:
OK, so it seems like Cloudways (server provider) is not going to allow any pdf creation library to run on their servers.
What other options do we have here?
* Migrate to another server provider?
* Spin up a standalone certificate generation service on another server?
This technique works for pretty much every use case. It could be how to get from one place to another, or creating brand options for a prototype as you vibe code.
Creating a plan
Once you have your options and understand the problem space, it’s time to select a high level option and ask the model to make you a plan. Only by understanding how you would implement something can you understand whether it’s feasible.
Equally, it’s important to make ChatGPT break its reasoning down into steps, so that you can see where it’s making assumptions, and stress test its idea of the steps. Any time you ask ChatGPT to do something, it will make assumptions, and may default to the average.
Forcing it to work step by step allows you to see and critique steps to get to an optimal output, and sets up the next stage, which is you giving feedback and context to optimize each stage of the plan.
The other benefit is that once you’ve worked through this process, you have your own project plan to follow.
Hustle Badger case study: Migrating our website
Context:
We decided that building a microservice on a different server that could create and deliver certificates for our users felt like a good option.
Ask:
[ChatGPT had given us some example code]
Ok, let’s not get into the code just yet. Let’s focus on the plan. Outline the steps of the plan that make sense really simply. Then we’ll walk through them step by step later.
Working through steps
Once you have your high level plan, you’re ready to walk through it step by step. Working in this way allows you to give the LLM more context, ask for trade offs, and refine your output as you go, asking for complexities and alternatives at each step.
This allows you to make a decision on how to move forward at every stage, and iterate the model’s output until you have clear and viable instructions.
This all comes back to the core principle we discussed at the top of this section: namely iterating and providing context sequentially to get the best output. Tackling complex tasks in bitesize chunks helps keep the LLM on a tight rein, and gets much better results. Once again you can seed ideas and push it to do better thinking.
Opportunity Solution Trees can be configured as issue trees
Case study: Issue tree for a dating website
Context:
We were creating dummy material to teach. We wanted to take a generic product example, and generate an issue tree that we could show as an example.
Ask: [For our hypothetical dating app]
‘This issue tree should have 3 levels of issue, in a parent, child, child structure. Let’s define 3 parent level issues together, and work sequentially down the tree.’
‘Let’s take a very product focused approach and think about our first level of issue: why are users getting fewer matches on our dating app? Overall active users being down has to be one problem. Density of users has to be another (similar people, location, attractiveness, etc.). What else might be a problem?’
Playing ping pong
When you combine these principles with this sequential, iterative way of working, you wind up working as though you were playing ping pong.
What you are doing is like batting the ball back and forth, between whatever specialist tool you’re using to do the work (a server environment, a vibe coding tool, etc), and the LLM, to work through execution stages.
Internalizing this technique, and thinking of yourself as the interface between the two tools, or the knowledge node between them, is hugely effective.
Context engineering
Giving LLMs context is key to help them understand your specific use case, and what constitutes a good answer.
When thinking about context, you don’t necessarily need to give them everything, but you do need to give them the best possible context. There are techniques to help with this.
We’ll talk through 3 context engineering techniques:
- Context loading: uploading or linking examples and asking the LLM to analyze them, rather than explaining everything yourself
- Context pulling: setting context and asking for clarifying questions to help complete the task
- The RACE format: a powerful prompt structure for agents and other prompt-only settings
Context loading
It’s easy to jump straight into a topic and feel like you’re providing context, when many basics remain unstated.
One technique which can help walk the LLM step by step towards what you already know is to ask it about the topic you’re working on. This works best when you have an online resource which you can share.
Example:
Context: you want to build an adapted version of a food delivery site, like Deliveroo.
Prompt: I’m sharing a website with you. Tell me about this company. https://deliveroo.co.uk/
[Wait for answer]
Prompt: Come up with 3 options to improve the conversion of this site.
Having the LLM research itself is faster and often more comprehensive than you supplying information. You can correct any errors, and you can refine its understanding by having it critique or ask questions about what it sees.
The above example is significantly faster than asking the LLM to act as a UX designer, and then explaining Deliveroo. It’s best when doing something relatively quick and simple, but where the LLM might deviate from what you need.
Context pulling
Another effective technique is to ask the LLM to act as a researcher or interviewer, and pull all the information it needs from you.
This is highly effective because this leans on the LLM as an intelligent sparring partner, and drags more out of you. You always know more than you think, or might instinctively communicate.
It’s important to counteract the LLM’s inbuilt sycophancy bias, and to create forcing functions so that it does the work. To do that, give it broad but very minimal overarching context, and instruct it clearly to work step by step, recalling the context it has gathered as it goes.
You can use the below framework to keep your prompt effective:
- What: what you want the LLM to do i.e. ask iterative questions
- How: how you want it to do it i.e. one by one, remembering context
- Output: what the end product of the conversation should be i.e. a spec
- Check: ask for the LLM to clarify that it has understood the task
Example:
Prompt:
‘Ask me one question at a time so we can develop a thorough, step by step specification for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification that I can hand off to a developer. Let’s do this iteratively and dig into every detail. Remember, only one question at a time. Say ok, and I’ll tell you the idea.’
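The What / How / Output / Check framework above can be sketched as a small prompt builder. This is our own illustration (the function name and default wording are assumptions, not any official API):

```python
def context_pulling_prompt(what, how, output, check=True):
    """Assemble a context-pulling prompt from the What/How/Output/Check framework.

    what:   what you want the LLM to do (e.g. ask iterative questions)
    how:    how it should do it (e.g. one at a time, remembering context)
    output: the end product of the conversation (e.g. a spec)
    check:  whether to ask the LLM to confirm it has understood the task
    """
    parts = [
        f"{what}.",
        f"Do this {how}.",
        f"Our end goal is {output}.",
    ]
    if check:
        parts.append("Before we start, confirm you have understood the task.")
    return " ".join(parts)

# Rebuilding something close to the example prompt above:
prompt = context_pulling_prompt(
    what="Ask me one question at a time so we can develop a thorough specification",
    how="iteratively, building each question on my previous answers",
    output="a detailed specification I can hand off to a developer",
)
```

Keeping the four parts as separate arguments makes it easy to reuse the same skeleton across tasks, swapping only the What and Output.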
This requires more time and more effort on both sides, so it makes most sense when you’re tackling larger, more complex topics. However it requires less upfront thinking on your side, and it often forces you to think through topics in detail as the conversation progresses. This is also a great use case for using the voice mode, and dictating to the model.
If you don’t know the answer to a question, you can ask the LLM to step in and answer it itself (ideally with options), simply by reflecting the question back to it.
One thing to watch out for is that the LLM will keep asking questions until you stop it. There are a few ways to manage this:
- Limit questions: i.e. ask it for its top 5 most important questions
- Let it run: you can always cut it off and set it the next task
Knowing when to stop comes down to judgement – but if there are pre-set patterns for common tasks, or the questions feel repetitive, it’s usually better to just try and fail, as you’ll learn more.
This is a very powerful technique, especially when combined with dictation. It’s especially helpful when you’re vibe coding and need to get up to speed on topics you’re less comfortable with or manage the complexity of building.
The RACE prompt
So far we’ve walked through scenarios where you can work iteratively with an LLM, and dialogue back and forth. But this isn’t always the case, especially when building agents.
The RACE prompt is an effective structure when you need a prompt which covers all bases and has a high chance of working well.
RACE stands for:
- Role: the persona the LLM should adopt
- Action: what you want them to do
- Context: key information which will help the LLM do it well
- Example: what good looks like
Role
Here you tell the LLM what persona it should adopt, e.g. a product manager, or an email writing assistant. Think carefully about whether it has a personality, and how it performs that role. Treat this as a job description: write down what your ideal hire would look like and how they would perform.
In the above example, you can see that we’ve created the role of email writing assistant, and given the assistant a direct, to the point personality.
Action
Here you describe what you want it to do. Think of this as a specification. You should include any constraints and must haves. List them clearly.
In the above example, you can see that we’ve insisted on a Bottom Line Up Front format, and asked it to avoid hedging.
If you’re unsure about any of these inputs, try context pulling as a technique, plus using options, to work with an LLM to design this prompt.
Context
This is a free-form field where you provide the LLM with any additional information it might need to give you better answers. Hard constraints and persona information belong in the earlier sections and should not be repeated here; this section is for additional environmental context.
In the above example, you can see that we’ve explained how emails can pile up in inboxes, and that it’s culturally acceptable for the LLM to perform according to its personality and instructions.
Example
This is possibly the most important part of the prompt: examples of what good looks like. The LLM is much more likely to perform well if you include them. Real artefacts, such as actual PRDs or specs when you’re generating a PRD or spec, will improve outputs enormously.
In the above example, you can see we’ve taken a typical, very polite British email asking for a report and setting a soft deadline of tomorrow, and transformed it into a direct message which states that the report is needed tomorrow, without any of the polite intros.
Don’t feel constrained: you can give multiple examples. Try to structure the example: in an agent setting be clear about what inputs it will get, and which outputs you’d like.
Finally, you can start from the examples, and work backwards. If you know what good looks like, and have seen it, or created it yourself, you can work with an LLM to define the role and action steps, plus generate more examples to feed your end prompt.
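Pulling the four parts together, a RACE prompt can be sketched as a small builder. The section wording below loosely mirrors the email-assistant example discussed above, but the function, headings and exact phrasing are our own illustration:

```python
def race_prompt(role, action, context, examples):
    """Assemble a prompt with the four RACE sections: Role, Action, Context, Example."""
    sections = [
        ("Role", role),
        ("Action", action),
        ("Context", context),
        ("Examples", "\n\n".join(examples)),  # multiple examples are fine
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)

prompt = race_prompt(
    role="You are a direct, to-the-point email writing assistant.",
    action="Rewrite my draft emails in Bottom Line Up Front format. Avoid hedging.",
    context="Recipients' inboxes pile up, and direct requests are culturally acceptable.",
    examples=[
        "Input: 'Hi! Hope all is well. Might it be possible to see the report "
        "at some point, perhaps tomorrow?'\n"
        "Output: 'I need the report by tomorrow, please.'"
    ],
)
```

Because `examples` is a list, working backwards from good examples is easy: collect them first, then fill in the Role and Action around them.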
Better inputs, better outputs
Techniques like dictation, snapshots and in-line editing can save you a ton of time.
We’ll be going through:
- Talking to LLMs: why you should adopt voice dictation
- Visual aids: show, don’t tell
- Canvas: move what you’ve been working on into an editable canvas, tweak and export it
Talking to LLMs
The benefits of talking, rather than typing to LLMs are
- Speed: 2 to 3 times faster than the average typing speed to convey information
- Processing: LLMs are excellent at processing unstructured data and working out what you actually want
- Context included by default: you tend to narrate the scenario alongside the asks since you’re used to human to human communication norms
There are two voice modes available for talking to ChatGPT:
- Advanced voice mode (sound wave symbol): you talk to it, and it talks back to you. Fun if you like talking to LLMs, but often inconvenient in a work setting
- Dictate (microphone symbol): you leave ChatGPT a long voice note, it transcribes it, and it writes back to you. Can be quicker, easier and more discreet to scan its written responses, especially if you’re reading back previous chats
You can also use dictate mode to transcribe email content, evaluation notes, and so on.
Tip:
In order to make this effective and stop the LLM transcribing every time you pause, it can be helpful to set a trigger word, so ‘transcribe everything I’ve said when I say Pineapple’.
This is especially useful if you’re thinking through a difficult problem on a walk or similar, want to capture your thoughts, and ask for feedback / a synthesis etc later.
Be aware that the context window for audio inputs on ChatGPT is ~30 minutes. That gives you time to dictate quite long, but not endless, notes.
Visual aids
LLMs are excellent at reading and processing visuals. Rather than explaining a situation, either by typing it out or talking it through, it can be quicker and easier to just send an image.
Examples:
Shopping
In the above image, we’ve photographed a book shelf and asked for graphic novel recommendations which aren’t already there. It read the shelf at speed, cited its findings and returned a list of options.
Code
LLMs tend to be extremely effective and quick at finding code errors. In one example, an engineer uploads over 1000 characters of code to ChatGPT, which finds that an API call is failing because of a typo: an ‘s’ has been added to the end of a string by accident.
This can really help in complex situations or where you need to dump a lot of context, quickly. It’s also great for analytics.
Canvas
So far we’ve talked a lot about what to put into ChatGPT, but less about how to get your outputs out of chat.
Canvas is a feature which moves the chat into an editable format, from which you can easily export whatever you’ve been creating with ChatGPT (a spec, an article, etc). All you need to do is ask.
Canvas also allows you to easily edit (or have the LLM edit) individual sections of text or analysis, without having to regenerate the whole thing. You can regenerate individual sections or just do it yourself.
While working with LLMs is fun, remember that sometimes it’s simply quicker to make a change yourself: 100% perfect outputs are rare.
Once you’re happy with your document, you can export it as .docx, .pdf or Markdown (.md).
Repeat tasks
Custom GPTs
Custom GPTs help you save time on repetitive tasks. Any time you’re repeatedly going to ChatGPT with the same problem, it’s a candidate for a Custom GPT, which is set up to do that thing, every time, without you having to input all the same steps.
There is a marketplace for Custom GPTs, but we recommend setting up your own.
Tip:
Setting up a Custom GPT needs a thorough and effective prompt. The RACE prompt framework is great for this.
Projects
Projects is a ChatGPT feature sandwiched between chats and GPTs.
You can file important chats here, and organise them, in order to avoid repeating context you’ve already shared.
You simply go back to that particular chat and pick up where you left off, as it already has all the previous instructions, context and so on.
You can also proactively create projects for clients, initiatives, etc., and add files to each one, which is a powerful feature. These might include call transcripts, artefacts or code snippets you’ve created, and so on.
Wrap up on ChatGPT Power Skills
LLMs amplify skilled thinking, but they don’t replace it. The highest returns come from structuring the conversation: requesting options, a plan, step by step optimizations and sharing concrete artifacts. Trial dictation and the ping pong method – you’ll be surprised how far you get.
More Hustle Badger Resources
Articles
Cohorts
On demand courses
Youtube
FAQs on ChatGPT Skills
How do I know when the model has enough context to proceed?
If the model’s follow-up questions stop being high-value and start asking minutiae, you have enough. Another signal: you can pick a clear next action and the model can write a plan for it. Stop when clarifying questions repeat or become trivial.
When should I use voice versus typing?
Use voice for ideation, stakeholder interviews, and long-form thinking. Use typing for short, precise prompts or when you need to paste code or structured data. If you record ideas while walking, use a trigger word to tell the model when to summarize.
How do I prevent the model from giving outdated UI instructions?
Ask it to re-check and to show the source of its steps. If the model suggests clicks you do not see, paste a screenshot of the UI and ask for an alternate workflow. Treat the model’s UI guidance as hypothesis, not instruction.
Are saved prompts secure to use across teams?
Saved prompts are useful but treat them like templates that may need sanitization. Remove sensitive data before sharing. Maintain a versioned prompt library and codify acceptable usage and data handling practices.
When is a custom GPT worth building?
Build one when you run the same transformation many times and need consistent output. Examples: meeting-note extraction, show-note generation, client email templates. Rank candidates by frequency and time saved.