Any time we begin work on a new project for a client, one of the first things they want to know is the project timetable. How long will a project take? How long until we start seeing deliverables?
These are tricky questions to answer. There’s a lot that goes into software development, and no two projects are exactly the same. Nevertheless, we know how important the process is to our clients, and with that in mind, we’d like to share some details on how our own software development cycle helps our clients meet their goals.
It’s safe to say our process has evolved over the past few years, and if you’ve worked with us in the past, you might be surprised at how we’ve changed. But before we dig into the details, let’s start with a bird’s eye view of our approach, from the earliest planning stages to final delivery.
Our process begins with a thorough evaluation of each client’s needs and current system. These high-level planning discussions give us the data we need to begin the engineering phase, where we build out specifications and develop solutions.
From there, each milestone undergoes an iterative process of quality assurance (QA) testing, followed by a final sign-off from the client. After approval, the project goes live, and we begin our ongoing process of support and maintenance.
This is the basic framework we use for all projects, but the specifics vary based on whether the task is an entirely new project or an extension to an existing build. But what doesn’t change is our strategic approach to the software development process.
Our goal is to plan out tasks that make sense for the story arc of the project and can be managed in a logical fashion. We don’t tackle tasks out of order or try to “knock out” the easy ones. We take a simple, tactical approach to the development timeline that minimizes complexity and keeps everyone on the same page.
It’s normal to wonder how long a project will take or how much it’ll end up costing, but keep in mind that there’s a lot of variability in the software development life cycle.
"We try not to make estimates before going through the engineering phase,” says Brad Gustavesen, ten24’s Chief Marketing Officer.
While we’re certainly able to come up with estimates, ranges, and ballpark figures for project deliverables, it’s irresponsible to quote hard numbers before we’ve had a chance to look at a system firsthand. In lieu of concrete figures, we draw on our experience, insights, and past work to establish the high and low ends of a price range, then cost out the options from there as baseline estimates.
Typically, we categorize a substantial build as a “new project” based on how many work hours it’s expected to take.
In the engineering phase, all parties know what’s being built, goals have been established, and all the broad strokes have been covered. Here, we dig deeper into the details to better understand the functionalities and workflows we’ll need to apply to start development. This means reviewing specific integrations, vendor relationships, IT considerations, marketing needs, shipping options, and more.
From there, we build what we call “user stories” that provide context for each integration. These tools map out each of the individual functionalities that will be built into the project and help keep the deployment focused on the benefits provided to the end-user. This part of the planning phase also involves creating a RACI document to clarify responsibilities and a functional spec document (FSD) that outlines the project, key objectives, user stories, and acceptance criteria.
These user stories are crucial to our engineering wireframing process, in which we create mock-ups of the planned application and assign user stories to different elements as outlined in the FSD. Any given project may have hundreds of these stories, each of which will be placed into the project management queue and assigned out to a developer.
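To make this concrete, here’s a rough sketch in Python of how a user story, its acceptance criteria, and its queue assignment might be modeled. The names and fields are hypothetical, not our actual tooling:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    QUEUED = "queued"
    IN_DEVELOPMENT = "in development"
    IN_QA = "in QA"
    APPROVED = "approved"

@dataclass
class UserStory:
    """One functionality from the FSD, framed around the end-user benefit."""
    story_id: int
    persona: str                # "As a returning customer..."
    goal: str                   # "...I want to save my shipping address..."
    benefit: str                # "...so that checkout is faster."
    acceptance_criteria: list = field(default_factory=list)
    assignee: str = ""
    status: Status = Status.QUEUED

    def assign(self, developer: str) -> None:
        """Move the story from the queue to a developer."""
        self.assignee = developer
        self.status = Status.IN_DEVELOPMENT

story = UserStory(
    story_id=101,
    persona="returning customer",
    goal="save my shipping address",
    benefit="checkout is faster next time",
    acceptance_criteria=[
        "Address persists across sessions",
        "Saved address pre-fills at checkout",
    ],
)
story.assign("dev_a")
```

Modeling stories this way keeps the acceptance criteria attached to the work item as it moves from the queue through development and into QA.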
Generally, we handle this work in two-week sprints, with the first week devoted entirely to development work and the second week devoted to QA. At the most, we’ll plan ahead only three to four weeks before re-assessing.
"We try not to plan out the entire project’s worth of sprints,” says Brad. “By planning no more than a month out, we’re able to stay flexible while still getting a good estimate of how much time each milestone will take.”
And it’s important to note that in the QA process, developers test these stories rather than the tasks themselves. By testing the story, it’s easier to tell whether the client’s need was adequately addressed in context, rather than simply testing whether the webpage function performs as it should. As these stories get approved, they’re passed along to the client—usually done in batches to keep things efficient.
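As a simplified illustration of the difference (hypothetical function names, not our QA suite), testing the story means checking its acceptance criteria in context rather than just confirming that a single page function runs:

```python
# A story passes QA only when every acceptance check
# succeeds in the context of the user's workflow.

def checkout_prefills_saved_address(session: dict) -> bool:
    # Stand-in for a real UI/integration check.
    return session.get("saved_address") is not None

def story_passes_qa(session: dict, checks: list) -> bool:
    """Approve the story only if all acceptance checks pass."""
    return all(check(session) for check in checks)

session = {"saved_address": "1 Main St"}
approved = story_passes_qa(session, [checkout_prefills_saved_address])
```

A page function can work perfectly in isolation and still fail a check like this if the surrounding workflow doesn’t deliver the benefit the story promised.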
Support projects involve ongoing tasks such as maintenance, building small enhancements, or any other function related to day-to-day operations. Our approach here follows a similar framework to the one outlined above, though on a smaller scale.
When we receive a request from a client, it’s sent to a support manager, who reviews the issue and determines how urgent it is. Does the request demand immediate resolution? Will it need an estimate? Or is it a simple, quick fix? These requests tend to build up over time, so request prioritization is essential for managing the different issues that come our way.
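A minimal sketch of that triage step, using illustrative categories rather than our internal system, could look like this in Python:

```python
import heapq
from itertools import count

# Hypothetical urgency levels; a lower number is handled sooner.
URGENCY = {"immediate": 0, "needs_estimate": 1, "quick_fix": 2}
_order = count()  # FIFO tiebreaker among requests of equal urgency

def triage(queue: list, summary: str, category: str) -> None:
    """Push a support request onto the queue, keyed by urgency."""
    heapq.heappush(queue, (URGENCY[category], next(_order), summary))

queue = []
triage(queue, "Typo on FAQ page", "quick_fix")
triage(queue, "Checkout is down", "immediate")
triage(queue, "Add a sales report", "needs_estimate")

urgency, _, summary = heapq.heappop(queue)  # most urgent request first
```

The tiebreaker matters: requests at the same urgency level come off the queue in the order they arrived, so nothing lingers indefinitely.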
We then apply the sprint process to get work done, with our support manager working with the project’s technical lead to ensure that no details are missed. This sprint usually involves two to four weeks of work, with roughly half the time spent on development and the other half spent on post-development QA. After approval, the support project is ready to launch.
Code review, simply put, is when developers proof each other’s code for mistakes, best practices, and standards. Having a peer review your work with fresh eyes is essential to catching errors and ensuring code integrity, and it facilitates knowledge sharing and transparency across the team. Slatwall’s code review policy not only helps prevent errors but also places an emphasis on becoming a better developer and growing skill sets through feedback. Whether the feedback corrects a bug or suggests a better way to accomplish a task, the end result, a better product and a well-rounded development team, is worth the additional investment in time.
In addition, by reviewing code in smaller chunks, developers gain a micro-perspective of the project over a longer period of time. As a result, those reviewing developers are not starting from scratch if they are called on to contribute to the project.
Reviewing and analyzing code also ensures that obvious errors or issues are caught and resolved before the QA and client teams start testing, which would otherwise cost everyone working time. Essentially, code is reviewed twice during the process: once by a peer developer and then by a manager.
Finally, and most importantly, Slatwall is a PCI Level 1 Certified hosting provider, which requires us to comply with strict security standards in our development and security practices. A code review policy, and proof that we follow it, is a key piece of compliance and a requirement of the annual audit our infrastructure must undergo. Each and every piece of code the team writes is reviewed with security standards and practices in mind; it’s not optional.
The QA process is crucial to project success for several reasons. Aside from being quality control for bugs and errors, it’s an important part of User Acceptance Testing (UAT). While our QA teams aren’t the end-user, their feedback is crucial to the process. And aside from that, it’s simply a matter of respect for the client’s time.
"Assumptions are expensive statements in our business,” says Brad. When project details get lost in the communication shuffle, deliverables become unfocused or start to suffer from scope creep. A thorough QA evaluation ensures that everyone’s expectations are the same and that everything works the way it should.
"In order to prevent defects before they arise, our QA Team works very closely with the projects Client Solutions Manager (CSM) and developers, performing QA testing as part of every sprint,” says Jada Flournoy, QA Lead at ten24. “This not only guarantees we are able to deliver a high-quality product that meets the needs, expectations, and requirements of the client, but also builds trust and loyalty."
Our commitment to QA highlights another central component of ten24’s development philosophy: Client communication comes first.
We understand that many clients prefer to take a hands-off approach to the development process, but the best results come from regular interaction between partners. Our approach here is to bring clients in early and involve them in discussions every step of the way, from planning to deployment to training.
"There’s a lot of value in being iterative with certain processes,” says Brad.
Specifically, we prefer to hand off each “sprint” to the client for review after it finishes our QA evaluation. While some agencies view this as too “high-touch,” in our view, this regular contact is essential to a positive outcome.
Overall, our goal is to establish an engineering process that allows our team (including the client!) to build a roadmap for the project that flows from engineering to development, to QA/UAT, and finally to support. Our approach is one we’ve been working on for years—and with a more hands-on approach to development than many agencies take, we’re confident in our ability to handle any software development task that a client may need.