BY Sudipta Lahiri | March 24, 2013 | Blog

Introduction

Within the SwiftKanban development team, we have evolved our Engineering practices by combining principles of Test Automation, Continuous Integration and Kanban thinking. At the same time, as we look to hire people for such a development environment, we have found it difficult to find people who understand it, even though we can hardly be called bleeding edge! Hopefully, this post helps in understanding our Engineering environment.

Daily Stand Up Meetings

[Image: SwiftKanban development]

The day starts with a stand-up meeting at around 9 am. Since we are a distributed team across three locations (3 cities, 2 countries), many of our team members join the call remotely. We conduct the meeting using our product development Kanban Board (shown below), so everyone is logged in prior to the call:

The purpose of this meeting is to get a quick overview of the team’s current situation: find out if any development tasks are blocked, select the day’s tasks, discuss any customer-identified defects (which we “Expedite”) and assess any broken builds.

Blocked cards get special attention in the stand-up call. Whenever needed, the discussion is recorded as comments against the cards discussed.


We try to complete the call in less than 30 minutes, but this does not happen often! Usually, one or two issues become time-hogs. Sometimes, a team member might interrupt and ask for an issue to be taken offline; but we do have some “silent” team members who prefer not to interrupt, and we try to encourage them to speak up! Over time, we have learnt to split the call into two parts: a) the regular stand-up call and b) a follow-on discussion of specific issues, where only the relevant team members need to stay on. Using the Kanban board, we are able to filter and zoom into the needed areas for a fairly effective meeting most of the time!

Starting with the CI Run

Once the stand-up call finishes, every developer checks the CI (Continuous Integration) run output to see if anything was broken in the previous night’s full automation run. For this, a consolidated failure report, covering both JUnit (our unit testing environment) and Sahi (our functional test automation environment), is sent to all test team members from the build. The report reflects not only the failures in the last run but also highlights the automated test cases that have failed in the last 3 runs. We have found that a test automation failure is not always linked to a product source code issue or to the test automation source code, but sometimes to random system behavior (e.g., the server not responding in time). Hence, tracking repeat failures is important for identifying true failures.
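To make the idea concrete, here is a minimal sketch in Java (with invented test names; the real report is generated by the build itself, not by code like this) of how tests that failed in every recent run can be separated from merely flaky ones:

    import java.util.*;

    // A minimal sketch, not our actual report generator: given the failure
    // sets from the last few nightly runs, a test that failed in every run
    // is a likely true failure; anything else may just be flaky.
    public class RepeatFailureReport {

        public static Set<String> trueFailures(List<Set<String>> failuresPerRun) {
            if (failuresPerRun.isEmpty()) {
                return Collections.emptySet();
            }
            // Start with the most recent run's failures...
            Set<String> repeated = new HashSet<String>(failuresPerRun.get(0));
            // ...and keep only the tests that also failed in every other run.
            for (Set<String> run : failuresPerRun.subList(1, failuresPerRun.size())) {
                repeated.retainAll(run);
            }
            return repeated;
        }

        public static void main(String[] args) {
            List<Set<String>> lastThreeRuns = new ArrayList<Set<String>>();
            lastThreeRuns.add(new HashSet<String>(Arrays.asList("CardMoveTest", "LoginTest")));
            lastThreeRuns.add(new HashSet<String>(Arrays.asList("CardMoveTest")));
            lastThreeRuns.add(new HashSet<String>(Arrays.asList("CardMoveTest", "BoardFilterTest")));
            // Prints [CardMoveTest]: the only test that failed in all three runs.
            System.out.println("Likely true failures: " + trueFailures(lastThreeRuns));
        }
    }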

[Image: Automation run report]

Further, we have an artifact repository where we store the Sahi HTML reports, which contain more information about the failures. Developers use these for further analysis.

If a developer’s name appears against a specific failure, their first task of the day is to fix the reported issue(s) and then move on to their regular cards on the Kanban board.

Developers use Eclipse for both automation script failure analysis and JUnit failure analysis. JUnit tests can be corrected and re-run on the fly in Eclipse.
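For example, a broken unit test of this kind can be fixed and re-run in Eclipse in seconds. This is a hypothetical, self-contained JUnit 4 example; the cycle-time helper is invented here only so the snippet compiles on its own:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // A hypothetical JUnit 4 test of the kind a developer would correct and
    // re-run on the fly in Eclipse; the helper under test is invented so
    // the example is self-contained.
    public class CycleTimeTest {

        // Whole days elapsed between two timestamps (in milliseconds).
        static long cycleTimeInDays(long startMillis, long endMillis) {
            return (endMillis - startMillis) / (24L * 60 * 60 * 1000);
        }

        @Test
        public void twoFullDaysBetweenStartAndEnd() {
            long twoDays = 2L * 24 * 60 * 60 * 1000;
            assertEquals(2, cycleTimeInDays(0L, twoDays));
        }
    }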

Development Process

One of the unique aspects of our development process is the association of an automation script with an individual owner. This was very important because, before we did this, it wasn’t clear who was responsible for fixing a failed script. From a nightly run, it is hard to identify which check-in (from the series of check-ins made throughout the day) caused a script to fail. Hence, we assign the original developer of the script the responsibility to fix it. This also turns out to be faster in most cases, because the owner is the person most familiar with the script.

For this reason, we use the Test Management repository of SwiftALM (where the test suite inventory exists) to manage our test assets. A snapshot of the repository is shown above.
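To make the ownership idea concrete, here is a rough Java sketch (script and developer names are invented; in practice the inventory lives in SwiftALM, not in code) of routing a failed script to its owner as a simple lookup:

    import java.util.HashMap;
    import java.util.Map;

    // A rough sketch of script ownership, assuming a simple name-to-owner
    // map; in practice the inventory lives in SwiftALM's Test Management
    // repository, not in code. Script and developer names are invented.
    public class ScriptOwnership {

        private final Map<String, String> ownerByScript = new HashMap<String, String>();

        public void register(String scriptName, String owner) {
            ownerByScript.put(scriptName, owner);
        }

        // Route a failed script to the developer who originally wrote it.
        public String ownerOf(String failedScript) {
            String owner = ownerByScript.get(failedScript);
            return owner != null ? owner : "unassigned";
        }

        public static void main(String[] args) {
            ScriptOwnership registry = new ScriptOwnership();
            registry.register("card_move.sah", "asha");      // .sah = Sahi script
            registry.register("board_filter.sah", "ravi");
            System.out.println("card_move.sah -> " + registry.ownerOf("card_move.sah"));
        }
    }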

Our source code is also integrated with a Sonar dashboard. On every CI run, the dashboard gets updated and provides valuable information about the Java code. We have enabled various plugins on Sonar, such as PMD and FindBugs. A developer is expected to look at this dashboard and correct the violations in their module’s source files on a continuous basis. The Sonar dashboard gives good insight into developers’ coding patterns and helps the team figure out better ways to write code.
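A typical correction looks like this (an illustrative example, not taken from our code base): FindBugs flags a stream that is not closed on all paths, and the fix is a try-with-resources block:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // An illustrative fix for a common FindBugs/PMD finding: a reader that
    // was opened but not closed on the exception path. try-with-resources
    // guarantees the stream is closed, clearing the violation.
    public class ReadFirstLine {

        public static String firstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }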

Development

Once the issues from the last CI run are addressed, the developer’s focus shifts to their main development card. Customer defects (blue cards) are our equivalent of the “Expedite” Class of Service and are dealt with first. Next, they work on any pink cards – internally identified defects – in their queue. Finally, they work on the User Story that they have been assigned or have pulled. Sometimes, a developer may also have “Engineering Tasks” (sometimes referred to as Technical User Stories), which typically get taken up last. This “priority policy” becomes the basis for developers to pull the next card when they are done with their current one.
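Expressed as code, the policy is simply an ordering over card types. This is a minimal sketch with an invented enum; the real policy lives on the board, not in code:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // A minimal sketch of the pull policy with an invented CardType enum.
    // Declaration order encodes the priority, and Java enums compare by
    // declaration order, so a plain sort applies the policy.
    public class PullPolicy {

        enum CardType { CUSTOMER_DEFECT, INTERNAL_DEFECT, USER_STORY, ENGINEERING_TASK }

        public static void main(String[] args) {
            List<CardType> queue = new ArrayList<CardType>(Arrays.asList(
                    CardType.USER_STORY, CardType.CUSTOMER_DEFECT,
                    CardType.ENGINEERING_TASK, CardType.INTERNAL_DEFECT));
            Collections.sort(queue);
            // Prints [CUSTOMER_DEFECT, INTERNAL_DEFECT, USER_STORY, ENGINEERING_TASK]
            System.out.println(queue);
        }
    }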

A few additional policies that we have defined:

  • 1. User Stories flow through the Design and the Functional Automation lanes. At the end of the Design stage, the T-shirt sizing estimate is converted into an effort-hours estimate. While code review is done for all checked-in code, automation code review is done only on a sample basis.
  • 2. Developers are also free to add tasks to a card and, if needed, assign some of the tasks to another developer, who is expected to pitch in.
  • 3. Developers work on a separate “feature branch” in SVN, created for each User Story. This branch becomes the development workspace for all the developers working on the User Story. This makes coordination within the development team easy, and informal code reviews can start early since the code is already committed. Once development is complete, the developer merges the changes to the main branch (trunk) in SVN and deletes the feature branch.

We use Cruise Control, which gets the latest code, does the build, runs the JUnits, deploys the build on the QA server and runs functional automation on all 3 browsers that we certify the product on.

Defect Validation

Developers are expected to keep an eye on the validation lane. If they have filed an internal or a customer-filed defect, they are expected to validate the fix on the QA environment and, if the fix passes, move the card to the “Ready for Deployment” lane. User Stories are validated by the Product Manager.

Deployment

We are not in a continuous deployment environment. We deploy whenever the number of “ready to deploy” cards reaches 20. We do not deploy automatically because we have some test cases that need to be manually validated for technical reasons (third-party product integrations, or test scripts that fail because of issues with our automation tool).
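The trigger itself is trivial; as a sketch, only the threshold below comes from our actual policy, and the rest is invented for illustration:

    // A trivial sketch of the "deploy at 20" trigger; only the threshold
    // comes from our actual policy, the rest is invented for illustration.
    public class DeploymentPolicy {

        static final int READY_CARD_THRESHOLD = 20;

        static boolean shouldScheduleDeployment(int readyToDeployCards) {
            return readyToDeployCards >= READY_CARD_THRESHOLD;
        }

        public static void main(String[] args) {
            System.out.println(shouldScheduleDeployment(14)); // false: keep accumulating
            System.out.println(shouldScheduleDeployment(21)); // true: time to deploy
        }
    }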

We hope this gives you a good idea of the daily life of a (Swift)Kanban developer! All said and done, it is a far more exciting, productive and energetic work environment than we have seen elsewhere and we thoroughly enjoy it!
