Making Sense of Agile Metrics

Leveraging the Agile methodology offers organizations a chance to streamline their software development process, ultimately making their business more efficient. Measuring the impact of Agile on an application engineering team becomes easier with the use of metrics. The ultimate question is: what kinds of metrics offer meaningful information and actionable insights to software engineers?

What follows is a look at some examples of useful Agile metrics and how they can make this modern methodology work for a software development shop. Perhaps your team needs to consider using them as well?

Useful Agile Metrics for Software Development

An article for Extreme Uncertainty analyzed the use of Agile metrics and offered a few insights on which ones added value to the software development process. Let’s check them out.

A Burnup Chart gives the project manager or Scrum Master a view of a project's overall progress by graphing how many stories are completed during each iteration compared to the total number of stories in the project. This is a simple metric that can be shared with business stakeholders curious about the current status of the work.

The article author, Leon Tranter, commented on the need to fully estimate the effort of each story for the Burnup Chart to be meaningful. If that estimation hasn’t been completed, he suggests using an average for any future stories.
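To make the idea concrete, here is a minimal sketch of how burnup data points could be computed. The story counts, total scope, and function name are all illustrative, not taken from any particular tool:

```python
# Minimal burnup data: cumulative stories completed per iteration,
# plotted against the total project scope. Numbers are illustrative.
completed_per_iteration = [4, 6, 5, 7]  # stories finished in each sprint
total_scope = 30                        # total stories in the project

def burnup_points(completed, scope):
    """Return (iteration, cumulative_done, scope) tuples for plotting."""
    points = []
    done = 0
    for i, finished in enumerate(completed, start=1):
        done += finished
        points.append((i, done, scope))
    return points

for iteration, done, scope in burnup_points(completed_per_iteration, total_scope):
    print(f"Iteration {iteration}: {done}/{scope} stories complete")
```

Plotting the cumulative line against the flat scope line produces the familiar burnup shape; if scope changes mid-project, the scope line moves too, which is exactly what makes the chart useful to stakeholders.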

Metrics related to Agile Stories

Estimating the development time for stories becomes easier when using metrics aimed at tracking the work spent on these portions of a project. Story cycle time measures the period it takes for a story to go from a ready-for-development state to its completion. Be sure to take into account the number of people working on a story for the most accurate view of the overall effort.

Story lead time includes the period between the creation of the story and its ultimate completion. Subtracting cycle time from lead time helps measure the effort spent in analysis.

Story count is essentially the average number of stories completed during a sprint. Taken together, these three metrics help measure the efficiency of a software development team during a project. They also serve nicely when estimating the effort on future projects or sprints.

Use First Time Pass Rate to track Quality

First time pass rate is a percentage used to track the test cases that pass either system integration testing or system testing on their first try. Tranter feels this is an especially vital metric for measuring the overall maturity level of a software development team’s use of Agile. His expectation for teams familiar with Agile is a rate of 95 to 100 percent. It definitely gives teams new to the methodology a goal worth reaching.
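A sketch of the calculation, under the assumption that each test case's run history is available as an ordered list of outcomes (the test case names and data structure here are invented for illustration):

```python
def first_time_pass_rate(results):
    """Percentage of test cases that passed on their very first execution.

    `results` maps a test case ID to the ordered list of outcomes it
    produced ("pass"/"fail"). Returns 0.0 for an empty input.
    """
    if not results:
        return 0.0
    passed_first = sum(1 for runs in results.values()
                       if runs and runs[0] == "pass")
    return 100.0 * passed_first / len(results)

runs = {
    "login-happy-path": ["pass"],
    "checkout-total": ["fail", "pass"],   # failed first, passed on retry
    "search-empty-query": ["pass"],
    "profile-update": ["pass"],
}
print(f"{first_time_pass_rate(runs):.1f}%")  # prints 75.0%
```

A team hitting Tranter's 95-to-100-percent target would see almost every test case land in the first-try-pass bucket; retries like the checkout example above are what drag the rate down.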

Hopefully, this quick analysis of a few Agile metrics offered some ideas on adding them to your own team’s software process reporting. They are especially worthy of consideration for shops embracing Agile for the first time.

Keep coming back to the Betica Blog for future news, stories, and insights from the rich world of software development. As always, thanks for reading!

Quality Assurance for Smartwatches and Wearables

As smartwatches and other wearables gain wider acceptance among consumers, compelling software applications remain the key to increased adoption of this new technology. With developers working on the cutting edge of a new software platform, QA engineers are also faced with learning how to properly vet applications for wearables. Do some of the same issues found when testing smartphones also apply to this new mobile device type?

This article offers an overview of the QA process for smartwatches and other similar devices, with a focus on app testing for Apple’s watchOS and Google’s Android Wear platforms.

Is Smartwatch Testing just an Extension of Mobile QA?

Some software development firms see wearable platforms as suitable for modified versions of their existing mobile app library. While this works for many simpler apps, the extremely limited screen real estate on a smartwatch somewhat tempers this enthusiasm. Wearables typically leverage a different set of gestures, which needs to be taken into account during development and testing.

Additionally, some smartwatch apps work in tandem with an app on a smartphone; for example, receiving a notification about an event on a wearable that gets handled on a paired smartphone app. In this case, a QA team is responsible for testing apps on two different devices simultaneously. Despite some similarities with mobile, software quality assurance on the wearable platform really needs to be treated as a different entity.

Software QA on watchOS

App development for the Apple Watch targets watchOS, Apple's wearable operating system. Developers can use either Objective-C or Swift as a programming language, just as with iOS apps. Also like iOS, the UX and design guidelines for watchOS are very strict, and Apple enforces them as part of its submission process, so your QA team needs to include vetting these guidelines as part of its testing regimen.

The Apple Watch user interface includes a digital crown and a variety of unique sensors, as well as the touchscreen, buttons, and microphone typical of a mobile device. Any watchOS test plans need to consider all possible inputs to the smartwatch. An Apple Watch connects to an iPhone using WiFi or Bluetooth, so keep this in mind when testing watchOS apps that work with an iOS app.

Include a few Apple Watches as part of your mobile test farm if your development team plans on building apps for watchOS.

Android Wear and Quality Assurance

Google's smartwatch operating system, Android Wear, is closely based on the regular Android platform. One advantage compared to the smartphone OS is that Google made the UI standard among device manufacturers, which makes development and QA easier, with little smartwatch model fragmentation. While not as strict as Apple, Google also provides a host of design guidelines and principles your developers and QA team need to be aware of.

While the Android SDK provides a Wear emulator, leveraging actual devices as part of your QA process is a must, as with the Apple Watch. With 10 different manufacturers offering Android Wear devices, acquiring models of the different smartwatches makes sense, but Google’s standardization of the interface lessens fragmentation problems compared to Android smartphones.

The inputs of an Android Wear smartwatch are similar to the Apple Watch's, with the exception of the digital crown. In addition to pairing with Android smartphones, Wear smartwatches can also be connected to iOS devices. Keep both of these points in mind when creating your test plans.

If you want more detailed information on software testing for wearables, check out this eBook by QA engineer Daniel Knott. In addition to watchOS and Android Wear, he also covers QA on the TizenOS and PebbleOS wearable device platforms.

Keep checking back at the Betica Blog for further insights on software quality assurance – no matter the platform!