Requirements Engineering Is Already Happening in Your Team

Your team runs requirements engineering every week. They call it refinement.
Or that conversation on Thursday where the product owner, the developer, and the tester sat down and argued about what "confirmed" actually means on an order status screen.
Requirements engineering has a branding problem. The name suggests a formal discipline with roles, gates, and deliverables - the kind of thing a large enterprise does before handing off a frozen spec. That image is why most teams reject it on sight: "We're agile. We don't do that here."
The argument isn't wrong. The conclusion is.
You've Been Doing This for Years
Requirements engineering is not a phase. It's a set of recurring activities that need to happen before and during every feature you build. Most teams do all of them. The ones that struggle do them accidentally or out of order.
The question is never whether your team does requirements engineering. It's whether they do it on purpose.
The pattern I see most often: teams that have trouble with requirements aren't skipping the work. They're doing it informally. Assumptions made in passing. Acceptance criteria invented during code review. The work happens - it just doesn't happen where anyone can see it, challenge it, or build on it later.
The Four Things Requirements Engineering Actually Is
Requirements engineering has four core activities. They form a cycle — not a linear sequence. They repeat per feature, per sprint, and sometimes per conversation.
The RE cycle is not a project phase - it runs continuously. A single refinement session can move through all four activities in thirty minutes. The output of each activity becomes the input for the next.
These four activities are the standard defined by the International Requirements Engineering Board (IREB) - the body behind the CPRE certification, the most widely recognized qualification in the field. The official terms are Elicitation, Documentation, Validation, and Management. This article uses Structure and Communicate instead of the latter two because they describe what actually happens in small teams more precisely: requirements get shaped into something buildable, then handed to the people who need them in a format they can actually use.
Elicit - find out what stakeholders actually need, not just what they asked for. This is not transcription. It's the work of asking follow-up questions, exposing implicit assumptions, and surfacing decisions that haven't been made yet. A refinement meeting where the team challenges a ticket is elicitation. Writing down what the PM said without questioning it is not.
Structure - translate raw intent into a form developers can act on. Epics, user stories, acceptance criteria, Gherkin scenarios. This is where an idea gets edges. Not every feature needs all of it — but something needs some of it, or the developer invents the structure alone at the keyboard, under time pressure, without the people who had the context.
Validate - check that the structured requirements reflect the original intent, cover the edge cases, and don't contain gaps. The definition of ready is a validation ritual. So is re-reading a ticket one day after you wrote it and realising the unhappy path isn't described anywhere. Validation doesn't require a formal review. It requires someone asking: does this actually say what we meant?
Communicate - get the requirements to the people who need them, in a format they can use. A Gherkin feature file checked into the repository. A clear PR description that explains the intent, not just the diff. The format matters because a requirement nobody reads has the same effect as a requirement that was never written.
The Gap Between Collecting and Eliciting
Most teams get this wrong in the same way.
Teams that document well can still ship the wrong thing. They write tidy tickets. They have templates for everything. And then they build something the user didn't want, because nobody worked out what the user actually needed before filling in the template.
Collecting requirements is writing down what you were told. Eliciting requirements is structured investigation - questions that expose gaps, scenarios that surface edge cases, challenges to vague language before it becomes vague code. From the outside, both look like someone typing notes. The quality of what gets built is completely different.
Consider a ticket that reads: "As a user, I want to see my order status." Four words of scope. The developer who picks it up will make at least five decisions that nobody made explicitly: which statuses exist, whether cancelled orders are visible, what happens when the status hasn't been updated, whether the screen is the same for guest and registered users, and what "status" means when an order is partially shipped. The ticket was written by someone who knew all the answers. It just never occurred to them to write the answers down, because the questions felt obvious.
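Those five unmade decisions can be made explicit in a few lines of Gherkin - the format mentioned earlier. The statuses and rules below are illustrative assumptions for this hypothetical ticket, not the only right answers; the point is that each scenario records a decision someone had to make:

```gherkin
Feature: Order status screen
  # Illustrative rules only - the real status set and visibility
  # policy are product decisions to make in refinement,
  # not at the keyboard.

  Scenario: Registered user views a partially shipped order
    Given my order contains 3 items
    And 2 of them have shipped
    When I open the order status screen
    Then the order status reads "Partially shipped"

  Scenario: Cancelled orders remain visible
    Given I cancelled an order yesterday
    When I open my order list
    Then the cancelled order is shown with status "Cancelled"

  Scenario: Guest user checks an order
    Given I placed an order without an account
    When I open the status link from my confirmation email
    Then I see the same status screen as a registered user
```

Three scenarios, fifteen lines - and three of the five implicit decisions are now written down where a developer, a tester, and a product owner can all challenge them before the sprint starts.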
The gap becomes visible at one specific moment: when a developer comes back two days into a sprint with "I have a question about this story." That question almost always represents an assumption that wasn't examined during elicitation. Not a new question - an old decision that nobody made.
A team with strong elicitation practice hears that question in refinement. It's the same question either way. What changes is when it gets answered, who answers it, and whether the answer goes anywhere permanent.
Why "We're Agile" Closes the Wrong Conversation
BDD is a requirements engineering practice. Story mapping is too. So is writing acceptance criteria - even the three-line, just-enough version. None of these are waterfall.
The agile manifesto never said "stop understanding what you're building." It said: stop spending six weeks writing requirements that nobody reads because the world changes before you finish writing them. That's a real constraint. The response to it is continuous, lightweight requirements work - not the absence of requirements work.
Moving fast with poorly understood requirements doesn't produce fast delivery. It produces fast rework. The features ship quickly. The correct features take much longer.
Requirements should be discovered continuously and documented lightly. Discovered continuously - not once upfront, not improvised per ticket. Documented lightly - not in a 60-page spec, not in a four-word ticket. There's a useful middle that most teams are not deliberately aiming at, because they've decided the conversation about requirements is the conversation about waterfall. It isn't.
Some teams take the Continuous Discovery argument further: ship quickly, put it in front of users, and let real usage replace upfront requirement conversations. That's not wrong - it's the next iteration of the same RE cycle, not a replacement for it. Discovery from usage is elicitation too. It just happens after a release rather than before one.
Systematic vs. Accidental - That's the Whole Question
The difference between teams that do this well and teams that don't isn't which ceremonies they run. It's whether the four RE activities happen deliberately or by accident.
Deliberate means elicitation happens before structuring. Validation happens before handoff. The conversation where someone asks "what does confirmed mean?" happens in refinement, not in a PR comment, not in a customer support ticket six weeks after launch.
Accidental means requirements engineering happens in fragments - in Slack threads that expire, in the developer's head, in the customer's feedback after the feature ships. The activities occur. They just don't accumulate. Nothing from today's refinement becomes a constraint on next sprint's scoping.
Barry Boehm's research on the cost of change made this concrete decades ago: a requirements error found during development costs roughly five times more to fix than one caught during elicitation. Found in production, it costs between twenty and a hundred times more. The numbers vary by study and context - but the direction has never been in dispute. Moving the question forward is always cheaper than answering it later. The activities are the same either way. What changes is when they happen - and what they cost.
Same four activities, in the right order, leaving a written trace. Not because written documentation is the goal. Because the trace is what prevents you from having the same conversation three more times.
Every team does requirements engineering. The question is whether yours leaves anything behind.
Speclr is built around the four IREB core activities in a single continuous environment: Discover customer needs and the product vision, Structure them into a user journey, Refine and validate them with user scenarios, and Communicate them through an iterative process. Try it now.

