Monday, February 02, 2009

Testing Tool Architecture Analogy!!

We have compared software industry challenges with manufacturing before. Maybe it's our passion for seeing how similar different verticals are!

Take a look at the architecture of the latest Mac Pro, which is based on Intel Xeon processors. Now check out the architecture blueprint of iTKO LISA (a testing, validation and virtualization tool):
And what a coincidence that both Apple and iTKO are beating analyst expectations!!
We know that all great minds think alike. Now we know that all great architectures look alike! :-) Just kidding.. but it is amazing to see so much commonality between a software architecture and a hardware architecture!!

Sunday, May 18, 2008

SOA Projects: Over budget, Over time and Under quality!

Is your SOA project running over budget? Is it because of over-staffing to meet the deadline, or is it because of poor budgeting, or both?

In the last couple of years, I have seen SOA projects not only overrun both budget and time but also deliver poor quality. It is no surprise that my observation about quality is in line with Gartner's prediction that unplanned downtime in SOA-based businesses would go up by 20% because of application failures.

So why are SOA projects overrunning budgets and time, while delivering poor quality? Shouldn't SOA be enabling faster-time-to-market and lower-costs, instead?

As we all know, any software development project has four basic variables: time, budget, scope, and quality. Change is the only constant in a typical SOA project, so a shifting scope is expected and cannot be called the culprit. Remember, the reason IT adopted SOA in the first place was to be able to respond to changing business requirements... i.e., business agility!

The remaining three variables (time, budget and quality) are impacted by the changing scope and by project managers' inability to think out of the box. Let me explain by using a manufacturing plant as an analogy for the SOA development process! Interestingly, there is a striking similarity.

The manufacturing industry has come a long way. Japanese competition forced Americans to learn advanced management techniques for running a production plant effectively and efficiently. One of the main reasons plants used to overrun budgets and delay shipments was their failure to understand the phenomenon of "dependencies and statistical fluctuations," which exists when delivery of a single product depends on several components that in turn depend on each other. Dr. Eliyahu M. Goldratt explained this in his theory of constraints (TOC), presented in his book "The Goal".
The theory of constraints is the application of scientific principles and logical reasoning to guide human-based organizations.
Software project managers, however, have not yet fully grasped that a similar phenomenon is now at play in the world of SOA-based development. This is mainly because the architecture paradigm lets companies build software that depends on services manufactured by different organizations, or even by third-party suppliers and partners. One of the basic principles of TOC is convergence: the more interconnected an organization is, the fewer constraints it will have. Applying this to SOA, the number of components and teams is growing while they remain only loosely connected, which means the number of constraints is growing.

SOA architectural dependencies spill into team dependencies, which in turn lead to redundant implementations and a lack of trust. These dependencies, when combined with business agility requirements, lead to constraints. Some of these constraints are mere manifestations of the dependencies themselves, whereas others are a direct result of multiple teams sharing limited resources. Here is a quick list of some of them:
  • Dependent service unavailability. This is the case when the dependent service is not implemented yet, which results in downtime or forces downstream teams to build redundant components.
  • Resource unavailability. Multiple teams going after the same set of resources.
  • Time constraint. Dependent services and resources are available, but time-sliced to accommodate multiple teams.
  • Intermittent availability. Even when dependent services exist, no SLA applies in the Dev/QA environment.
  • Changing behavior of the dependent service. This not only invalidates current workflows, but also makes the data brittle.
  • No control over the dependent service's timeline.
If we take a closer look, we will see that these dependencies, along with the constraints they create, work against the overarching business goal, i.e., to be agile!

Therefore, in order to deliver SOA projects on time and under budget, we must devise a process that helps eliminate these dependencies and the constraints imposed as side effects of loosely coupled architectures.
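One practical lever against the "dependent service unavailability" and "intermittent availability" constraints above is service virtualization: stand up a lightweight stand-in that honors the agreed contract so downstream teams stay off the critical path. Here is a minimal sketch using the JDK's built-in HTTP server; the /quote endpoint and its canned payload are hypothetical.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    /**
     * Minimal sketch of a "virtual" dependent service. While the real
     * service is unimplemented (or time-sliced across teams), downstream
     * dev and QA can build and test against this stand-in. The path and
     * payload are hypothetical.
     */
    public class DependentServiceStub {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8099), 0);
            server.createContext("/quote", DependentServiceStub::handle);
            server.start();
            System.out.println("Virtual service on http://localhost:8099/quote");
        }

        private static void handle(HttpExchange exchange) throws IOException {
            // Canned response that mimics the agreed service contract.
            byte[] body = "<quote symbol=\"ACME\" price=\"42.00\"/>"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        }
    }

Tools like iTKO LISA productize exactly this idea; the point is that even a hand-rolled stub removes the hard dependency from the critical path while the real service is being built.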

Saturday, April 14, 2007

Roles & Responsibilities of Developers vs. Testers in the new SOA world

Both developers and testers probably have a tougher role now in the SOA world, as things change very fast. One huge monolithic project is now equal to n smaller projects, which means n dev teams and n QA teams. N smaller projects consume more resources than one big project, but the benefit is a faster release train and quicker turnaround on market and customer requirement changes. Let's look at each of the benefits of SOA and how it impacts Dev vs. QA:
  • Smaller teams
    • DEV: More focused teams, faster development possible.
    • QA: Smaller dev teams mean smaller QA teams.
  • Shorter release cycles have a lot of implications:
    • Quality can no longer be pushed to the end
    • IT needs to be more agile to handle faster production upgrades
    • QA can no longer take one month to do system tests
    • DEV must build quality into the code from day-one
    • Engineering best practices are more critical than ever
  • Interdependency between smaller projects
    • Smaller projects must be able to work with each other
    • Standards play a huge role
    • A separate QA effort is required to validate interdependencies and high-level, system-wide business workflows
  • Many heterogeneous technologies
    • It's no longer only UIs that need testing!
    • Developers must understand all new upcoming technologies.
    • QA needs to become more technical; point-and-click on front-end UIs doesn't fly anymore
    • Test automation (or at least programmatic testing) is more critical than ever; more than 70% of exposed interfaces won't have any UI (see the sketch after this list)
  • Dependency on outside components and services
    • Loosely coupled architectures allow projects to pick off-the-shelf services
    • Some of these services are enforced by government (like DRPL checks)
    • Dev and QA must understand the implications and incorporate third-party dependencies into the overall strategy
    • QA must know where to stop; test the integration, not the third-party service!
  • Agile development and testing
    • Agile means the ability to change quickly with a changing environment.
    • To be agile, teams must be small and independent
    • SOA makes it possible for teams to be agile
    • Traditional waterfall processes don't work in SOA
    • Teams must understand the difference and adopt new processes that enable SOA development
    • QA works more upstream with development inside sprints.
    • Continuous testing is the key!
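To make the "no UI" point above concrete, here is a minimal sketch of a programmatic test against a service interface, assuming JUnit 4; the endpoint URL and the expected payload are hypothetical placeholders.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;

    /**
     * Programmatic test of a service interface that has no UI.
     * The endpoint and payload below are hypothetical.
     */
    public class QuoteServiceTest {

        @Test
        public void quoteEndpointReturnsWellFormedResponse() throws Exception {
            URL url = new URL("http://localhost:8099/quote?symbol=ACME");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // Assert on the protocol-level contract, not on pixels.
            assertEquals(200, conn.getResponseCode());

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            assertTrue("response should carry a quote element",
                    body.toString().contains("<quote"));
        }
    }

Notice that nothing here is point-and-click: the test speaks the service's own protocol, which is exactly the skill shift QA teams need to make.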

Implications/Challenges in WS-Testing

  • The biggest challenge is that people confuse WS-Testing with SOA-Testing. There may be a 50% to 60% overlap, but they are different: WS-Testing is just a subset of SOA-Testing. Check the following links:
    • http://www.developer.com/design/article.php/3588361
    • http://itko.blogspot.com/2007/03/big-ws-difference.html
  • QA teams are not used to testing non-UI interfaces
  • Manual testing doesn't work anymore. QA must improve its skillset to survive in the SOA world
  • SOA testing requires QA teams to have fairly good understanding of the underlying architecture
  • It requires QA teams to really understand the basics of different SOA technologies, some of which are: Web Services, EJB, JMS/ESB, REST, RMI, POJO, relational and hierarchical DB, .NET, etc.
  • QA teams are NOT used to managing test assets. They must follow engineering best practices.
    • Use of version control systems
    • Collaboration sites like wikis
  • SOA enables Agile development, which requires QA to work more upstream with development
  • All of the basic test automation challenges still apply
    • Test management, test data management
    • Version control, sharing, nightly test runs
    • Automated reporting, result analysis, debugging
    • High reusability, lower maintenance, test mobility
  • Once teams understand the difference between traditional QA and SOA-Testing, the next challenge in front of them is Test Data Management. Understanding all the data requirements and figuring out how this data will be fed into the test cases is the key (see the data-driven sketch after this list).
    • http://almquality.blogspot.com/2006/08/data-driven-testing-ddt.html
    • http://almquality.blogspot.com/2006/08/data-strategy-ds.html
  • Lastly, the teams MUST establish and follow some best practices.
    • Test Bed Setup - including server farms
    • Test Directory structure - project workspace
    • Naming conventions
    • Data Management, Test Libraries
    • Configurations
    • Versioning
    • Reporting
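Since Test Data Management keeps coming up, here is a minimal data-driven testing (DDT) sketch using JUnit 4's Parameterized runner. The data rows and the discount rule under test are made up for illustration; in a real setup the rows would come from a CSV file, spreadsheet, or database.

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    /**
     * Data-driven sketch: the same test logic runs once per data row,
     * so growing coverage means adding data, not code.
     */
    @RunWith(Parameterized.class)
    public class DiscountRuleTest {

        @Parameters
        public static Collection<Object[]> rows() {
            return Arrays.asList(new Object[][] {
                // orderTotal, expectedDiscount (hypothetical business rule)
                { 100.0,    0.0 },
                { 500.0,   25.0 },
                { 1000.0, 100.0 },
            });
        }

        private final double orderTotal;
        private final double expectedDiscount;

        public DiscountRuleTest(double orderTotal, double expectedDiscount) {
            this.orderTotal = orderTotal;
            this.expectedDiscount = expectedDiscount;
        }

        @Test
        public void discountMatchesExpectedRow() {
            assertEquals(expectedDiscount, discountFor(orderTotal), 0.001);
        }

        // Stand-in for the system under test.
        private static double discountFor(double total) {
            if (total >= 1000) return total * 0.10;
            if (total >= 500)  return total * 0.05;
            return 0.0;
        }
    }

The separation matters: once the data lives outside the test logic, the Data-Driven Testing and Data Strategy posts linked above become a question of managing rows, not rewriting tests.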

Saturday, January 13, 2007

Key to Success !!

Do you know the difference between Martin and Frank? Why did Martin lose while Frank wasn't fired in the boardroom?

Sure, there is no single formula for success, but there is definitely a common denominator... i.e., energy. Do you ever meet people and wonder, "Wow! How come this person has so much energy?" I am sure the answer is "yes," and I am also sure that in most cases you'd consider this person to be successful! If you have energy, you'll be successful in whatever you do. High-energy people are optimistic and possess a "can do" attitude. Most pessimistic behaviors stem from the roots of lethargy!

So, the question is: how do you get energy? Is it stamina, or is it physical health? Is it passion, or is it the correct diet? Is it sleep or caffeine? Different people are driven by different things. Following are some of the ways to get your energy boost:
  • Regular exercise (15 minutes a day)
  • Meditation
  • Music, Sports
  • Kids
  • Meeting new people
  • Reading articles and books
  • Attending seminars and shows
  • Reading biographies of other successful people
  • Setting clear goals and reviewing them regularly
  • Motivational movies
  • Caffeine, of course!
  • Just enough sleep
Pick whichever source works for you, just make sure you get your energy boost on a regular basis, and you'll find yourself climbing the stairs of success in no time.

Another point to remember is that once you get enough energy to kick off the reactor inside you, you'll also start radiating energy! People you meet will get energized by you (like induction) and they'll reflect that energy back.... It's wonderful to be among a group of very high-energy people.

Caution: Like viruses, low-energy people can deflate all your energy in no time. Make a conscious effort to stay away from couch potatoes and pessimistic people. Slowly develop a circle of only high-energy people around you.

One easy way to find success is to follow it. So try to be with successful people as much as possible. Induction is a wonderful thing, but it works on the negative side too - so be careful!

So, why was Martin fired?!

Thursday, December 28, 2006

Test Automation Tools!

As a Quality Architect, I have been tasked with evaluating test automation tools for QA/QE and development more often than I can remember. There is always a fight between homegrown and commercial tools, i.e., a build-vs.-buy decision. I have always been an advocate of homegrown solutions, until recently; I have architected several sophisticated frameworks myself. In this blog, I uncover some of the very basic requirements of a test automation framework, which should help you with your evaluation or with defining the requirements for your homegrown venture.

The complete requirement of a test automation framework can be captured in one line:
A tool that facilitates automating test scenarios and allows anyone to run them from anywhere and at anytime.
This means that automating tests should be easy and intuitive. Tests, once automated, should be able to run on any supported platform or operating system. And most importantly, anyone (QA, Development, Sustaining Team, or even customers, if required) should be able to run these automated tests in an unattended mode.

Appropriate logging and debugging mechanisms should be available to capture false negatives. The tool should provide a framework to test the core technologies that our SUT is built upon - SOA Web Services, .NET, EJBs, RMI, Web UI, Rich Clients, Command Line, SQL, Scripting, APIs, Raw SOAP, Proprietary XML and document formats, etc.

No tool can generate positive results if it does not take people and processes into account. Apart from the core test automation needs, a framework must also integrate with other existing tools in the ALM domain. For example, a test automation tool must integrate with a test management system, which should integrate seamlessly with requirements management, defect tracking and other top-level dashboards. No one tool can serve all our requirements, and that is why it is very important to have open integration APIs available for customization. Continuous Integration and Agile testing are the new buzz these days. A framework must mesh well with CruiseControl, Ant, Maven and build repositories.

SCENARIO:
A test engineer or a developer automates a test and checks it into a version control system. CruiseControl kicks off the nightly build and executes all the pre-deployment test cases. A provisioning system deploys the latest bits and kicks off the post-deployment automated tests. Test results are automatically pushed over to a central server, where they get mapped to the requirements. An email notification is generated with an up-to-date report.

Next morning, the manager checks the email, clicks on the link, logs into the reporting system and gets the latest release readiness matrix with detailed drill-down test coverage and code coverage reports.

The company decides to ship automated tests with its product to its customers. Even in the absence of the build workspace and central reporting server, customers are able to execute the automated tests and get a local report!
The above scenario captures the majority of the requirements of a test automation framework. Some may think it is too extreme, and for others some pieces of this scenario may not be applicable at all. But if you really think about it, this is the kind of infrastructure that is required to build high-quality software applications. It is required for continuous integration and agile development & testing.

RECAP:

A test automation framework should (choose the ones applicable to you):
  • be platform/OS independent
  • provide detailed logging and debugging mechanisms
  • support SUT technologies
    • SOA Web Services, .NET, Raw SOAP
    • J2EE, EJBs, RMI, POJO
    • Command Line, Scripting
    • Web UI
    • Rich Client - Swing UI
    • Databases, SQL
    • Raw XML
    • Proprietary document and transport
  • be able to execute test cases in headless and batch mode (see the sketch after this list)
  • require zero coding - but keep coding available for advanced users
  • be version control friendly - no binary files to check in
  • provide APIs/mechanisms to integrate with other ALM tools
    • integrate with development IDEs
    • integrate with build workspace
    • integrate with Continuous Integration tools
    • integrate with Code coverage tools
    • integrate with test management tools
  • provide detailed reporting (text, xml, html, etc) with APIs to customize and integrate
  • provide data-driven capability - a must
  • provide distributed application support
  • be able to execute remote commands
  • ease of use, good documentation, training and support available!
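As one illustration of the headless/batch bullet above, here is a minimal sketch of a nightly runner built on JUnit 4's programmatic API; the test class names are placeholders (they reuse the sketch classes shown earlier on this page).

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    /**
     * Headless batch runner sketch: executes a suite of test classes with
     * no IDE or UI attached, prints a summary that a reporting system can
     * scrape, and sets the exit code so CI tools (CruiseControl, Ant, etc.)
     * can gate the build on it. The test class names are placeholders.
     */
    public class NightlyTestRunner {
        public static void main(String[] args) {
            Result result = JUnitCore.runClasses(
                    QuoteServiceTest.class,
                    DiscountRuleTest.class);

            System.out.printf("ran=%d failed=%d time=%dms%n",
                    result.getRunCount(), result.getFailureCount(),
                    result.getRunTime());

            for (Failure failure : result.getFailures()) {
                System.out.println("FAIL: " + failure.getTestHeader());
            }

            // A non-zero exit code fails the nightly build.
            System.exit(result.wasSuccessful() ? 0 : 1);
        }
    }

A thin wrapper like this is obviously not a framework, but it shows how little is needed to make automated tests runnable by anyone, from anywhere, at any time.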
Is there such a framework available in the market? I have worked with over 40 different commercial and open source tools and have not found a single one that delivers even 25% of these requirements. That is why most companies resort to homegrown solutions. I was also one of the advocates of building a homegrown solution, until recently!

BREAKTHROUGH:

Recently, I came across LISA from iTKO. To my surprise, the tool is very impressive - much better than any other in the industry. LISA seems to deliver over 80% of the above requirements - as if the company read my mind and captured all the requirements!! There are minor quirks (like all others), but the tool is built on pure Java and XML, runs everywhere, and provides open APIs for expansion and integration into anything! It has amazing data-driven capability and provides mechanisms to automate complicated end-to-end scenarios. It is a dream tool for developers and test engineers! It allows you to plug in your own Java code and mash it up with other technologies. Developers don't have to maintain a separate workspace for test cases - the test cases can be kicked off from the same build.xml file. XML reports and a custom report generator can be used to integrate test results into anything.

Monday, September 25, 2006

Missed time to market window! Really?

We know project management is all about juggling the three balls of time, cost and quality. A project is successful if it meets the functional and non-functional requirements within predetermined time, cost and quality constraints.

The traditional project management approach (and hence 99% of the tools) focuses on completing the defined work within given time constraints and cost limits. However, the focus has recently been shifting more to the quality of the final output!!

Let's look at some examples:
  • Google. Didn't Google miss the time-to-market window long before it released its search engine?
  • Apple iPod. Would it have made a difference if the iPod had been delayed another 6 months?
  • Toyota Prius in 2000. Missed the TTM by three years! (Audi released its first hybrid in 1997 and Honda released its hybrid in 1999)
More and more companies are realizing that quality rocks!! If your product is high quality, it doesn't really matter if you are a year or two late to the market. Every product has its life, but a high-quality product tends to live longer - which changes the whole Net Present Value (NPV) calculation, in case you are using NPV to evaluate the viability of your projects and releases.
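To make the NPV point concrete, here is a toy comparison; the cash flows and the 10% discount rate are made up purely for illustration.

    /** Toy NPV comparison: all cash flows and the discount rate are made up. */
    public class NpvSketch {

        // NPV = sum over t of cashFlows[t] / (1 + rate)^t
        static double npv(double rate, double... cashFlows) {
            double total = 0.0;
            for (int t = 0; t < cashFlows.length; t++) {
                total += cashFlows[t] / Math.pow(1 + rate, t);
            }
            return total;
        }

        public static void main(String[] args) {
            double r = 0.10;
            // Rushed, low-quality release: earns earlier but dies after year 3.
            double rushed = npv(r, -100, 60, 60, 30);
            // Late, high-quality release: a year of missed market, longer life.
            double late = npv(r, -100, 0, 60, 60, 60, 60);
            System.out.printf("rushed NPV = %.1f, late-but-good NPV = %.1f%n",
                    rushed, late);
        }
    }

With these made-up numbers the rushed release nets roughly 27 while the late-but-long-lived one nets roughly 73: a longer product life adds more discounted cash-flow terms than an early start saves.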

Project requirements can be divided into functional and non-functional buckets. Functional requirements are the core (and supplementary) features of your product. Non-functional requirements are the systemic qualities, which encapsulate all the "ilities" - Availability, Scalability, Reliability, Flexibility, Extensibility, Interoperability, Compatibility, Testability, Understandability, Load and Performance, Stability, Resiliency, Manageability, Maintainability, Security, Supportability, Adaptability, Configurability and Usability! Note: not all ilities are applicable to all product offerings.

Your product may have over a thousand functionalities, but just pick a handful of core ones (maybe 3 to 5), plus all of the non-functional requirements, for your first release. A high-quality product markets itself: word-of-mouth is the most effective marketing tool. Once a customer is hooked, slowly roll out new features. That way you'll keep the relationship going and you can get a continuous inflow of money - easy from the SEC's perspective and no hassle of accounting manipulations either! That's what is driving the software as a service (SaaS) market today.

SOA is the SaaS enabler, and it is changing the way software is released. SOA brings business agility. However, our project management tools are still old-fashioned. Project managers are still focused on TTM and CTM concepts. They are still chasing deadlines and pennies. Quality awareness is forcing ALM companies to come up with more sophisticated tools that stitch the SOA fabric.

For innovation and quality, you are never late to the market!

Wednesday, September 20, 2006

Follow on: Revisiting the Definition of Software Quality

Interesting article and discussion on the definition of quality on StickyMinds.com! The article dates back to 2001, but it is still very much relevant. Robert L. Glass has done a good job of defining what quality is and what it is not. As you read through the comments, not everyone agrees with his definition - as one would expect. Quality is a FAT word and can be interpreted in a zillion ways. It is therefore important that the project team agrees on one definition of quality and sticks to it. Consistency is far more important than the definition itself.

ISO definition of Quality: The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. (ISO 8402: 1986, 3.1)

This definition captures both functional and non-functional requirements. And BTW, the official name for all the "ilities" is Systemic Qualities. And there are a lot more systemic qualities than Robert has mentioned - for instance, interoperability, availability, scalability, etc.

Another point where I disagree with Robert is his claim that "customers/users must participate" in prioritizing and selecting "ilities". Some of these systemic qualities are customer-facing and others are company-facing. For example, it is in the company's best interest to ensure there is flexibility in the code for future expansion, and understandability is important to the company for maintenance purposes! The customer doesn't care whether your code is modular or your architecture is flexible. All he cares about is getting the feature set he wants, when he wants it. The customer couldn't care less about your business requirements. But when we talk about quality, all requirements come into the picture:
  • customer requirements
  • business requirements (capturing market and corporate requirements)
  • legal requirements
  • government requirements
  • social requirements
  • testability requirements
  • operations requirements
  • engineering requirements
I don't think we need to be in accord with the customer on all of these requirements!!

Another interesting topic raised in the article is whether quality can be quantified, given Robert Glass's definition. I find it rather amusing because, I think, quality can be defined and can even be quantified. Of course, not everyone will agree with your definition and your way of quantifying it, but you can definitely do it. And as I said, consistency is far more important than the definition itself.

Read Quality Index (QI): Measure of Risk for more insight into how you can measure software quality.

Friday, September 15, 2006

It all boils down to Metrics!!

Setting up a goal is one thing, but how do we know that we have achieved it? Software engineering is becoming more of an art than a science. Success is a relative term! A project manager with exceptional artistic and articulation skills can sell a project that is on the road to failure as a successful investment to the executives. In the absence of real numbers, darkness prevails. And under this darkness, all decisions lead down the path of failure.

Snapshot:
"We have a GA date approaching. PPM calls a Projet-Team meeting and takes a vote of confidence, which decides the fate of the software!!"

"A P1 bug is not a show-stopper if it already exists in production. The release will not degrade the production quality!!"

"QA gives a conditional GO with list of risks. By the time decision propogates to executives, the attachment is dropped and the Conditional-Go turns into a Sure-shot-Go!!"

"PM: The problem is not in our piece of the code. The issue is because of the other component that we are dependent on!!"

Sound familiar? Interesting, isn't it?

Let's face it: we need sophisticated tools that can generate real-time metrics, so that anyone can make informed decisions. People often mix product quality up with process quality. Even though a high-quality process generates a high-quality product (the TQM principle), I believe the metrics for the two should be different. For example, a higher percentage of test automation improves QA process quality but doesn't directly improve the underlying product quality! Note: automation of processes earlier in the ALM cycle has a more direct impact on product quality.

Here is the list of questions that metrics should be able to answer:
  • Q1. What is the overall Quality Index (QI) of the product? What is the QI for a particular feature or requirement? What is the QI of different components? (See the illustrative sketch after this list.)
    • Consistency of the processes and measures is the key here.
    • It is easy to fabricate a QI model that concentrates on intrinsic product quality!
  • Q2. What's our Release Readiness? What's the risk if we release our product today?
  • Q3. What's the QI trend across different releases and different builds?
    • Trend is more important than actual QI snapshot.
    • Errors in the QI (if any) cancel out when you read trends
  • Q4. What's the Dependency Matrix (DM)? How do other SOA components impact my product? How does my product impact other offerings in the organization?
    • Current snapshot from QI perspective
    • Roadmap overlap for future releases. Cross-project Backlog Management.
  • Q5. What are the real-time coverage graphs?
    • Test Coverage (tests validating requirements)
    • Code Coverage (tests validating code)
  • Q6. Testing Strategy Automation.
    • When files A, B and C change, which features get impacted? What test cases and configurations should a test lead plan for the next build?
  • Q7. Process Quality. How productive is my team?
    • Measures of test automation.
    • Comparisons with baseline (and manual testing)
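To illustrate Q1, here is a purely hypothetical QI roll-up; the inputs and weights below are my assumptions for this sketch, not the actual model from the Quality Index (QI): Measure of Risk post.

    /**
     * Illustrative Quality Index (QI) roll-up. The inputs and weights are
     * assumptions for this sketch, not the actual QI model. The point is
     * that a consistent formula makes QI comparable across builds.
     */
    public class QualityIndexSketch {

        static double qi(double testPassRate, double testCoverage,
                         double codeCoverage, double openP1Penalty) {
            double score = 0.40 * testPassRate
                         + 0.30 * testCoverage
                         + 0.20 * codeCoverage
                         + 0.10 * (1.0 - openP1Penalty);
            return 100.0 * score; // QI on a 0..100 scale
        }

        public static void main(String[] args) {
            // Build 1042: 96% pass rate, 88% of requirements covered by
            // tests, 72% code coverage, 2 open P1s against a cap of 10.
            System.out.printf("QI(build 1042) = %.1f%n",
                    qi(0.96, 0.88, 0.72, 2 / 10.0));
        }
    }

Whatever the exact formula, keeping it identical from build to build is what makes the trend in Q3 trustworthy: consistent errors cancel out, as noted above.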
With the above metrics in hand, I can easily make statements like:
  • We are ready for the release!! Our product QI crossed 85% in the last build.
  • Because of TTM (time to market) pressures, we have decided to release our product with 65% QI. To mitigate the risk, we have also decided to increase our customer support resources.
  • Our product is not ready for GA because we have a dependency on products B, C and D, and product B has a QI of only 30%. Since B is tightly coupled with our core, we are not in a position to release our product.
  • I can effectively utilize my QA resources to concentrate on only the impacted features in a build. We don't have to regress every build every time. We can validate a build with a handful of fixes in less than two hours, and with over 95% confidence!!
  • We can now sell SLAs and QLAs around certain metrics because we have a consistent (and automated) way of capturing them.
  • I can trace a customer escalation all the way back to requirements, because we have end-to-end integration of ALM tools with excellent search facilities.