About a year ago, we did a project for a publicly traded company in Austria. $2 billion a year in revenue. Worth about $35 billion on the market. A descendant of the founding family brought us in after watching one of our early YouTube videos. He wanted a competitive intelligence system — scrape competitor websites, LinkedIn profiles, product updates, news. Generate a bi-weekly executive newsletter. Simple enough.
We built it. It worked. We delivered exactly what was scoped. And then we walked away from what could have been a million-dollar-plus partnership.
Not because the system failed. Because we made six mistakes that, in hindsight, are the difference between being a dev shop and being a partner. Every one of them taught us something we now build into every engagement.
Mistake 1: We Didn’t Define Success Deeply Enough
The success criteria were straightforward: does the system work? Does the newsletter provide good content? Do the stakeholders sign off?
That’s not success. That’s a checkbox.
What we should have asked: if we ripped this system out tomorrow, would anyone panic? Would they throw a fit? Would something break in their operation? If the answer is no — if this is just a nice-to-have that lands in someone’s inbox and maybe gets read — we haven’t built something that matters.
The newsletter we delivered was another piece of content competing for attention against every other newsletter these executives receive. What would have made it indispensable is context. Raw competitive data is a commodity. The value is in how that data maps to each executive’s specific role and decisions. The product manager needs different intelligence than the head of sales. The engineering lead cares about different signals than the marketing director.
We never talked to those people. We never asked what would make them stop what they’re doing to read this. We never asked what a 10 out of 10 looks like for each of them. We treated the newsletter like a deliverable instead of a product that needs to find product-market fit with its users.
The fix is simple in concept: go deeper on defining success upfront. Ask the uncomfortable questions. What would make this so valuable that losing it would be detrimental? If you know anything about startup product development, this sounds familiar — because it is. Building internal AI systems requires the same obsession with user value that building a startup product does.
Mistake 2: Time to Value Was Too Long
We deployed the system in about 45 days. Testing ran another two weeks. So roughly 60 days before they had a working product in their hands.
With the tools we had back then — N8N, RAG with vector databases, models that weren’t nearly as capable — 45 days was a fast turnaround. But here’s what happened during testing: the system was hallucinating. The chatbot component was producing unreliable answers. The main stakeholder’s first impression of the product was that it couldn’t be trusted.
Even though we fixed the issues within two weeks, first impressions are hard to undo. Once someone sees a system spit out wrong content, they carry that skepticism forward. Every future output gets evaluated through a lens of “is this actually right?”
What we should have done is progressive deployment. Module by module.
Step one: set up the scraping, get the data flowing, and send the stakeholder a rough summary — not a polished newsletter, just organized information. “Here’s what we’re pulling. Give us feedback on what’s valuable and what isn’t.” That would have accomplished three things: faster feedback loops, collaborative development, and an earlier first impression built on real data rather than a buggy prototype. (A rough sketch of this first module is below.)
Step two: build the chatbot on top of the collected data. Let them start querying it. Capture their actual questions — which would have told us exactly what kind of analysis the newsletter should contain.
Step three: generate the newsletter informed by their real usage patterns.
Each module delivers value on its own. Each one builds trust. Each one generates the feedback needed to make the next module better. Instead, we went into a cave for 45 days and emerged with a finished product that had a rocky first impression.
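To make step one concrete, here’s a minimal sketch of what that first module could look like. Everything in it is illustrative rather than what we shipped: the competitor URLs are placeholders, the prompt is a guess at what a “rough summary” instruction might say, and it uses the OpenAI Python client where the original system was built in N8N.

```python
# Illustrative sketch of "step one": scrape competitor pages and produce a
# rough, unpolished digest. URLs, model, and prompt are all assumptions;
# the production system was built in N8N, not Python.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

COMPETITOR_PAGES = [
    "https://competitor.example.com/news",       # placeholder targets
    "https://competitor.example.com/changelog",
]

def scrape_text(url: str) -> str:
    """Fetch a page and reduce it to visible text."""
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # strip non-content markup
    # Collapse whitespace and cap length so the LLM call stays cheap.
    return " ".join(soup.get_text(separator=" ").split())[:8000]

def rough_digest(snippets: list[str]) -> str:
    """One LLM call that organizes raw scrapes into grouped bullets,
    deliberately framed as unpolished so the stakeholder critiques the
    data being collected, not the prose around it."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any capable model works
        messages=[{
            "role": "user",
            "content": (
                "Organize this raw competitor-page text into bullet points "
                "grouped by source. Do not polish or editorialize. Flag "
                "anything that looks like a product, pricing, or hiring "
                "change.\n\n" + "\n\n---\n\n".join(snippets)
            ),
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    digest = rough_digest([scrape_text(url) for url in COMPETITOR_PAGES])
    print(digest)  # in practice: email it with "tell us what's valuable"
```

The design choice that matters here isn’t the libraries. It’s that the output is deliberately rough and ships with a question attached, so the stakeholder reviews the data with you instead of judging a finished product.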
Mistake 3: We Only Talked to One Person
We worked with one point of contact at a company with dozens of executives who would receive this newsletter. We never met any of them. We never asked to. We assumed our main contact would relay everything we needed to know about what each stakeholder valued.
He was playing middleman. He had better assumptions than we did about what the executives wanted, but he was still assuming. We were playing telephone — the information reached us filtered through his perception of their needs, not their actual needs.
If we had talked to the other executives, several things would have been different. We would have understood their individual research habits and could have built personalized intelligence feeds. We would have built relationships with multiple people at the company, creating multiple internal champions. And when it came time to propose the next project, we would have had advocates across the organization instead of relying on one person’s enthusiasm.
This is a pattern we now treat as non-negotiable. During kickoff, we require all stakeholders to be present. It’s not optional. Because even if the current project only touches one person’s workflow, the goal is to get on the radar of leadership broadly. When they start thinking about AI for their own departments, we want to be the team they already know and trust.
Mistake 4: We Treated It Like a Project, Not a Partnership
This is the umbrella mistake. Every other mistake flows from this one.
At the time, we were optimizing for reps. We wanted to take on as many projects as possible across different industries and use cases so we could build our skill set. We were cutting our teeth. And in fairness, that strategy worked — we got massively better, saw dozens of use cases, and developed the methodologies we use today.
But the cost was real. We checked every box on the scope document and called it a win. That’s dev shop behavior. A partner would have come back with: “Here’s what we built. Here’s what we learned about your operation while building it. Here’s where we think the next opportunity is. Here’s the ROI analysis. Here’s what we’d tackle next.”
We didn’t do any of that. We delivered the thing, shook hands, and moved on to the next project.
Mistake 5: The Handoff Was Shallow
We migrated the system to their N8N instance, wrote documentation on the architecture, and had one demo call. Our main contact — who didn’t know N8N — probably forwarded the docs to his tech team, who put them on a shelf.
What should have happened: bring in the technical team who would actually maintain the system. Run a workshop. Walk through the architecture together. Change some variables live so they understand how the prompts and logic work. Get buy-in from the people who will be responsible for keeping this thing running and improving it.
A real handoff creates internal champions on the technical side. Those champions become advocates for the next project. Skip the handoff, and the system slowly decays until someone decides it’s not worth maintaining.
Mistake 6: Discovery Was Too Narrow
This ties back to everything. Our discovery was surface-level. We understood one person’s idea for a newsletter. We didn’t understand the organizational need it was supposed to serve.
If we had gone deeper, we might have discovered that the newsletter wasn’t even the right format. Maybe what the executives actually needed was a dashboard. Maybe it was a chatbot they could query on demand. Maybe it was a weekly briefing structured around upcoming strategic decisions. The newsletter was one person’s idea for how to deliver competitive intelligence. There were probably better ideas hiding in the actual workflows of the people who would use it.
Coming in as the expert — the authority on what AI can do and how to build these systems — we should have said: “The newsletter is a feature, not the product. Let us understand the actual problem you’re solving and propose the right solution.” That would have established credibility, built trust, and produced a system that actually mattered to the organization.
What Changed
Three things, and they changed everything about the quality of our work and the relationships we build.
Mandatory leadership alignment at kickoff. Every stakeholder who matters is in the room from day one. Even people whose work won’t be touched by the current project. They need to know who we are and what we do, and to start thinking about where AI could create value in their world.
Deep discovery with real metrics. We align on KPIs, success criteria, and the actual business value we’re trying to produce. We meet with the people who will use the system. We understand how they currently do the work and why. We define success as something that would be painful to lose — not just functional to have.
Progressive deployment. Module by module, getting feedback at every stage, making the development process collaborative. The client understands the system because they helped shape it. They become internal champions because they have ownership over the outcome.
The difference has been night and day. The depth of relationships we build, what we learn about industries and businesses, the systems we’re able to deliver — all of it improved dramatically.
Looking back, “mistakes” might be too harsh a word. We did what was on the scope. We delivered a working system. By the standard of most AI projects, it was a success. But the standard shouldn’t be “did we check the boxes.” The standard should be “did we build something so valuable that ripping it out would be detrimental, and did we build a relationship deep enough to keep going?”
We didn’t. And that’s the million-dollar lesson.