Welcome to Episode 46 of The Hockey Stick Show! I’m Miko Pawlikowski, and in this episode, I had the chance to chat with Dan Cleary, co-founder of PromptHub. We dove deep into the world of prompt engineering — what it is, why it matters, and how it’s evolving alongside rapid advancements in AI.
Understanding the Rise of Prompt Engineering
Dan explained how PromptHub emerged from real challenges building LLM-powered features into traditional software. Versioning, collaboration, and the non-deterministic nature of LLMs highlighted the need for a dedicated, GitHub-like platform for prompts.
Prompt engineering, it turns out, isn’t just a trend — it’s about treating prompts like first-class citizens in software development.
Why PromptHub, Not GitHub for Prompts?
Dan shared why GitHub alone doesn’t cut it for managing prompts:
Different update cycles: Prompts evolve faster than code.
Non-technical collaboration: PMs and domain experts need to iterate on prompts without touching code.
Testing & deployment: PromptHub includes tools tailored for LLM workflows, like testing prompts across models and environments.
Prompt Engineering as a Core Skill
We explored the evolution of prompting — from copy-paste templates to a nuanced skill essential in production applications. Dan emphasized:
Practicing prompting improves your model intuition.
Strong prompts are crucial in LLM-integrated products.
Prompt engineering may not be a standalone role forever — but it will remain a vital skill for PMs, engineers, and AI builders.
The Myth of the Jobless AI Future
Dan addressed fears about AI replacing jobs:
“If engineering becomes 10x easier, we won’t have 1/10th of the engineers — we’ll have 10x more code.”
AI will empower smaller teams to achieve more, not eliminate human roles. The key? Learn the tools, become harder to replace.
Prompting Today: Still a Bit of Magic
From politeness in prompts to chain-of-thought breakthroughs, we’re still learning what works. Some "tricks" are fading, but clear, structured prompting remains core. As models evolve (like GPT-4.1 and Claude 3), prompt style must adapt too — and companies now publish official guidance on model-specific prompting.
What Makes Prompt Engineering Hard
Dan broke down what still makes this challenging:
Translating tacit knowledge into text
Handling subtle context in real-world scenarios
Designing reusable, portable prompts across models
Even with smarter models, clear instructions remain an art — and a differentiator in production LLM apps.
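One common way to tackle the portability problem above is to keep the task instruction model-agnostic and apply model-specific framing as a separate layer. The sketch below is a made-up illustration of that idea; the model names and framing rules are assumptions, not real model requirements.

```python
# Illustrative sketch of a portable prompt: a model-agnostic instruction
# plus per-model framing applied at render time. Model names are invented.

BASE_INSTRUCTION = "Extract the customer's name and order ID from the message."

MODEL_STYLES = {
    "terse-model": "{instruction}\nAnswer as JSON only.",
    "verbose-model": (
        "You are a careful assistant.\n"
        "{instruction}\n"
        "Think step by step, then answer as JSON."
    ),
}

def render_prompt(model: str, instruction: str) -> str:
    # Fall back to the bare instruction for models with no special framing.
    style = MODEL_STYLES.get(model, "{instruction}")
    return style.format(instruction=instruction)

print(render_prompt("terse-model", BASE_INSTRUCTION))
```

Keeping `BASE_INSTRUCTION` separate means a model swap only touches the framing table, not every prompt in the app.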
Looking Ahead: What’s Next for Prompting
We discussed where the space is going:
Formalization of tooling (like PromptHub, MCP protocols)
Agents, reasoning, long-term planning
Voice interfaces as a rising trend
More companies building prompt ops and infra stacks
Prompt engineering is here to stay — but it’s becoming more sophisticated and integrated.
Getting Started with Prompt Engineering
Dan’s advice for beginners:
Use models a lot. Prompting improves with practice.
Test across models. Understand how they interpret inputs.
Explore community prompts. Learn by forking and tweaking.
Read the PromptHub blog for deep dives and practical guidance.
Live in London: The First In-Person Prompt Engineering Event
Dan and I are excited to co-host the first in-person promptengineering.rocks Conference on October 16, 2025 in London — following the success of the virtual event in 2023. Stay tuned for more details!
Thanks for tuning in!
You can connect with Dan on PromptHub.us and follow the latest prompt engineering discussions on their blog and community platform.