Last month I contributed to an agile forum that was collecting a list of good agile trainers and companies. Below is the list of people I recommend, based on one or more of the following criteria:

  • I have personally met them at conferences/workshops
  • I regularly read about their work
  • I did my Agile training with them

Australia
Kane Mar http://kanemar.com/about-me/ (I did my CSM training with Kane)
Rowan Bunning http://www.scrumwithstyle.com/about-us

Asia
Vernon Stinebaker http://www.scrumalliance.org/profiles/16047-vernon-stinebaker

Europe
Jurgen Appelo http://www.management30.com/about-the-author/

USA
Lyssa Adkins http://www.coachingagileteams.com/about/
Tobias Mayer http://agilethinking.net/aboutme.html

Training Companies (I’ve seen these groups promoting their training products at conferences regularly)
Agile University http://www.agileu.org/
Agile Academy http://www.agileacademy.com.au/agile/our_courses
Rally Software http://www.rallydev.com/services/agile-coaching-and-training-services
Version One http://www.versionone.com/training/agile_training/
Lean-Kanban University http://www.leankanbanuniversity.com/

If you know of good Agile trainers or companies, do share them with me and I’ll update this list.

In a previous post I explained just one scenario where a lack of user feedback during development can land your project in the wilderness with your clients.

Traditionally in software teams, roles like business analyst, product owner and usability specialist are responsible for getting client feedback by a variety of means. It’s time to empower our teams with new feedback and interface-testing tools, so they can grab every opportunity to collect smart feedback data, build great products, and save costs while raising client satisfaction.

Let me propose a few ideas for software development teams:

  1. Get closer to your clients by measuring and reducing feedback loop travel time (see the sketch after this list).
  2. To me, feedback is a “Value Channel”: create new channels and you’ll increase the value of your product.
  3. Use the new tools available on the market to collect feedback.
  4. Make data-based decisions rather than emotional ones by validating your ideas.
  5. Clients’ time is valuable; providing feedback shouldn’t be more than two clicks away.
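
To make idea 1 concrete, here is a minimal Python sketch of measuring feedback loop travel time. The event records and timestamps are made up for illustration; in practice they would come from your deployment logs and feedback tools:

```python
from datetime import datetime

# Hypothetical records of when a change reached clients and when the first
# piece of client feedback about it came back.
feedback_events = [
    {"shipped": datetime(2012, 5, 1, 9, 0), "first_feedback": datetime(2012, 5, 4, 16, 30)},
    {"shipped": datetime(2012, 5, 8, 9, 0), "first_feedback": datetime(2012, 5, 9, 11, 0)},
]

# Travel time: how long a change sits in front of clients before anyone reacts.
travel_hours = [
    (e["first_feedback"] - e["shipped"]).total_seconds() / 3600
    for e in feedback_events
]

print(f"Average feedback loop travel time: {sum(travel_hours) / len(travel_hours):.1f} hours")
```

Tracking even a rough number like this makes “get closer to your clients” something you can measure sprint over sprint.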

A few feedback and interface-testing tools:

Verify – verifyapp.com
Notable – notableapp.com
BugHerd – bugherd.com
Usabilla – usabilla.com

What’s your feedback loop travel time, and what tools are you using to collect data?

So why not validate your strategy before or during your project, before you spend too much time and money?

Ever been part of a team where:

  • You were handed a UI design on a new project.
  • The design came from some other team or group.
  • Your team started the project and built what it was asked for.
  • You built and tested the new application (all ready to go).
  • Then the big moment arrived: you presented the newly built application to your clients, waiting for a WOW reaction.
  • Oops… your clients said they needed a Zebra, and this looked more like a Donkey.

Did someone miss the point somewhere in the whole process?

You’ve got it right! While you were busy building the perfect Zebra, no one asked users for timely feedback during development. I am not saying this is the case with every team, but the reality of the software development business is that teams often try to build the perfect application on the first attempt to wow their clients.

Research shows that the more client involvement there is in product development, the better the outcome. The quicker you receive feedback, the better positioned you are to make changes at lower cost, and you might end up close to building the perfect Zebra your client wanted :-)

Let me ask: how is your team collecting feedback from clients, and how are you making smart data-based decisions?

To build your perfect Zebra, you can leverage new tools to collect user feedback. These tools are smart, cheap and scalable, and they will guide your team to make data-based decisions.

In the next blog post I’ll be explaining more about these tools.

In late 2010 I started a project with my new team; everyone was relatively new to agile development. Although we were defining acceptance criteria for user stories, the Product Owner and the Quality Assurance team had different expectations about what was acceptable in a story. Below was my attempt to highlight the difference between those expectations.

Acceptance Criteria: Usually define the scope of a user story and the product owner’s expectations. They clarify the product owner’s intent and what they see as acceptable for clients.

Definition of Done: How we define the quality of a story, i.e. the steps we must take to ensure the highest quality work is delivered within a user story. It can serve as a quality agreement between developers and QAs.

Example Story: As a guest user, I want to be able to view a shared dashboard so I can view the latest trends in my industry.

Acceptance Criteria:

  1. A guest user must provide a valid email address to access a shared dashboard.
  2. Dashboard access and privileges are correctly matched for a given email.
  3. A guest user must be able to view a dashboard on all supported browsers.
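
Criteria like these can also be turned into automated checks, so the product owner’s intent is verified on every build. Below is a minimal pytest sketch of criterion 1; the `DashboardAccess` class and its email rule are hypothetical stand-ins for whatever your application actually does:

```python
import re

import pytest


class DashboardAccess:
    """Hypothetical gatekeeper for shared dashboards (illustration only)."""

    EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def grant_guest_access(self, email: str) -> bool:
        # Criterion 1: a guest must provide a valid email address.
        if not self.EMAIL_PATTERN.match(email):
            raise ValueError("A valid email address is required")
        return True


def test_guest_with_valid_email_can_access_shared_dashboard():
    assert DashboardAccess().grant_guest_access("guest@example.com")


def test_guest_with_invalid_email_is_rejected():
    with pytest.raises(ValueError):
        DashboardAccess().grant_guest_access("not-an-email")
```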

Definition of Done:

  1. No fatal errors in code.
  2. No new notices.
  3. Must have unit tests.
  4. Completed code docs.
  5. Code review finished.
  6. Code checked into the build branch.
  7. Interfaces work on all supported browsers.
  8. Performance checks are successful.
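
Unlike acceptance criteria, the Definition of Done is the same for every story, so a team could even keep it as a machine-readable checklist and gate story completion on it. A minimal Python sketch, with the item names and statuses purely illustrative:

```python
# Hypothetical machine-readable Definition of Done; in a real team the
# completed checks would be reported by CI jobs, review tools, and so on.
DEFINITION_OF_DONE = [
    "no fatal errors in code",
    "no new notices",
    "unit tests written",
    "code docs completed",
    "code review finished",
    "code checked into build branch",
    "interfaces work on supported browsers",
    "performance checks passed",
]


def story_is_done(completed_checks: set) -> bool:
    """A story counts as done only when every checklist item is satisfied."""
    return all(item in completed_checks for item in DEFINITION_OF_DONE)


# Example: everything is finished except the code review, so the story is not done.
checks = set(DEFINITION_OF_DONE) - {"code review finished"}
print(story_is_done(checks))  # False
```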

Let me ask: how does your team write story acceptance criteria and create quality parameters?

The above comparison helped my team gain a greater understanding of delivering high-quality features; it also helped us create a fun development environment where team members were not getting frustrated over compile errors.