10 Tips for Adopting Agile in the Enterprise

A few months back I wrote about the challenges facing agile adoption in the enterprise. I received a lot of requests for tips on overcoming these challenges, which I have addressed in a follow-up post on AgileScout. The key points are:

  1. Get management buy-in.
  2. Plan for entire releases, not just one sprint.
  3. Plan sprints with specialized/shared resources in mind.
  4. Complex inter-dependencies are a reality – deal with it!
  5. Keep reasonable sprint lengths – 3 or 4 weeks.
  6. Don’t expect all sprint deliverables to be production ready.
  7. Define “done” consistently across the teams.
  8. Reserve at least two “hardening” sprints.
  9. Be pragmatic about usable design documentation.
  10. Adopt continuous integration principles.

Read the entire article here.

UPDATE: To clarify, the tips above specifically address the challenges put forth in the earlier post around enterprise application integration projects. For other enterprise projects, a vanilla Scrum approach may work fine.

5 Performance Testing Considerations for Application Integrations


Enterprise integrations are complex both functionally, because they implement a business process, and technically, because they introduce one or more runtime layers between applications. Since these integrations typically represent end-to-end business flows, developers need to ensure that performance meets the business need.

Here are some considerations when planning for performance testing of service oriented architecture (SOA) projects that integrate enterprise applications, such as Oracle’s Application Integration Architecture (AIA).

Update April 21, 2011: AIA-specific tuning details can be found in Chapter 28 of the Developer’s Guide for AIA 11gR1 (E17364-02).

1. Define the End Goal. Clearly.

It may sound obvious, but the lack of a clear end goal is the main reason performance testing efforts go awry.

Note: “make it run faster” does not count as a clear goal!

Quantify desired metrics in an objective manner by setting Key Performance Indicators (KPIs). Here are some KPIs you may want to check for (a small sketch of capturing these as concrete targets follows the list):

  • Throughput of the end-to-end business flow by users, payload size, volume
  • Response Time for the end-to-end business flow by users, payload size, volume
  • Throughput of the integration layer only (legacy application interactions stubbed out)
  • Response time of the integration layer only (legacy application interactions stubbed out)
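
To make the targets concrete, they can be captured as data and checked automatically after each test run. The sketch below is a minimal, hypothetical example; the metric names and numbers are illustrative, not from the article:

```python
# Hypothetical KPI targets for an end-to-end order flow (illustrative values only).
KPI_TARGETS = {
    "throughput_orders_per_min": 400,        # end-to-end flow at the agreed load profile
    "response_time_p95_sec": 2.0,            # end-to-end flow, 95th percentile
    "integration_only_orders_per_min": 900,  # integration layer only, applications stubbed out
}

def check_kpis(measured: dict) -> list:
    """Return a list of human-readable KPI violations for a test run."""
    failures = []
    if measured["throughput_orders_per_min"] < KPI_TARGETS["throughput_orders_per_min"]:
        failures.append("End-to-end throughput below target")
    if measured["response_time_p95_sec"] > KPI_TARGETS["response_time_p95_sec"]:
        failures.append("95th-percentile response time above target")
    if measured["integration_only_orders_per_min"] < KPI_TARGETS["integration_only_orders_per_min"]:
        failures.append("Integration-layer throughput below target")
    return failures

# Example: results collected from one run (hypothetical numbers).
print(check_kpis({
    "throughput_orders_per_min": 380,
    "response_time_p95_sec": 1.8,
    "integration_only_orders_per_min": 950,
}))
```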

2. Use Metrics Relevant to the Business

System performance KPIs should be derived from business metrics so that both business and IT are involved. This results in more realistic goals than arbitrary benchmarks set by developers or vendors. For example, a throughput KPI could be derived from a formula that uses software cost and peak order volume to arrive at a “minimum orders per CPU core per minute” indicator that satisfies the business need.

When looking at transactions, always consider “peak” spikes versus the average. For example, orders usually have peak periods (e.g. holiday season sales) during which the system is subjected to a transaction load an order of magnitude higher than at non-peak times. Defining KPIs based on peak transaction volumes not only helps in setting realistic goals, but also ensures the project truly succeeds by handling the load when the business needs it most.
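
As an illustration of the kind of formula described above (the function, headroom factor, and numbers below are hypothetical, not from the article), a throughput target can be derived directly from the peak business volume and the hardware the business is paying for:

```python
def derive_throughput_kpi(peak_orders_per_hour: float,
                          cpu_cores: int,
                          headroom: float = 1.25) -> float:
    """Minimum orders per CPU core per minute needed to absorb the business peak,
    with some headroom above the observed peak (all inputs are hypothetical)."""
    peak_orders_per_minute = peak_orders_per_hour / 60.0
    return (peak_orders_per_minute * headroom) / cpu_cores

# Example: 120,000 orders/hour at the holiday-season peak on a 16-core server
# works out to roughly 156 orders per CPU core per minute.
print(round(derive_throughput_kpi(120_000, 16), 1))
```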

Finally, don’t try to boil the ocean – identify a subset of the integration use cases which are prone to performance bottlenecks and meet all the KPIs before attempting other ones.

3. Do you REALLY Need Production Grade Hardware for Testing?

Using dedicated hardware is always better than sharing existing development or QA environments. However, every business has different needs for its enterprise applications, and these needs even vary by business process. For example, an order-to-cash process may need consistently high performance under medium-to-high load, whereas a financial close process may only need that level of performance once a quarter, under high load.

Instead of buying or configuring hardware that exactly matches every possible target scenario, consider using commodity hardware with matching “normalized” KPIs that are scaled down from the target business scenario. For example, say the production hardware provides a given compute unit (CPU/memory/cache specification) and the commodity hardware is determined to be one-fourth of that compute unit. If the business KPI target is 40 orders/CPU core/minute on production-grade hardware, then the internal, normalized KPI would be one-fourth of that, i.e. performance testing would need to achieve 10 orders/CPU core/minute on the commodity hardware to be considered successful.

Of course, the benchmark may not scale linearly, but this can easily be factored into the equation, providing a good educated estimate of the integration’s performance. Compared to the alternative of not testing at all due to hardware unavailability and then discovering issues in production, using commodity hardware with normalized KPIs is a very viable performance testing approach.
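
A minimal sketch of that normalization, using the numbers from the example above; the compute-unit ratio and the non-linearity correction are assumptions you would measure or estimate for your own hardware:

```python
def normalized_kpi(production_kpi: float,
                   compute_ratio: float,
                   scaling_factor: float = 1.0) -> float:
    """Scale a production-grade KPI down to commodity test hardware.

    compute_ratio  -- test hardware capacity as a fraction of production (e.g. 0.25)
    scaling_factor -- correction for non-linear scaling (1.0 = assume linear)
    """
    return production_kpi * compute_ratio * scaling_factor

# The example above: 40 orders/core/minute in production, test box at one-fourth
# the compute unit, assuming linear scaling -> 10 orders/core/minute target.
print(normalized_kpi(40, 0.25))        # 10.0
# With an estimated 10% non-linearity penalty the target drops to 9.
print(normalized_kpi(40, 0.25, 0.9))   # 9.0
```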

4. Choose a Consistent Testing Strategy

For integration scenarios, consider a bottom-up testing strategy: optimize a single use case fully (until it reaches the desired KPIs) before introducing additional artifacts or flows.

Sequence the use cases thoughtfully, which can save some cycles. For example, between a Query and an Insert use case, the Query may look simpler, but it needs data that the Insert use case would seed anyway, so it may make sense to start with Insert. Also, identify the “data profiles” for the use cases and create representative sample data; for example, B2B orders may have 50–100 lines per order whereas B2C orders may only have 4–5 lines per order.
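
A sketch of generating representative payloads per data profile; the line-count ranges come from the example above, while the order structure and field names are hypothetical:

```python
import random

# Data profiles: how many lines a representative order carries (from the example above).
DATA_PROFILES = {
    "B2B": (50, 100),
    "B2C": (4, 5),
}

def sample_order(profile: str, order_id: int) -> dict:
    """Build a hypothetical order payload with a realistic number of lines."""
    low, high = DATA_PROFILES[profile]
    return {
        "order_id": order_id,
        "profile": profile,
        "lines": [{"line_no": n, "item": f"SKU-{n:04d}", "qty": random.randint(1, 10)}
                  for n in range(1, random.randint(low, high) + 1)],
    }

# Seed data for the Insert use case, later reused by the Query use case.
seed_orders = [sample_order("B2B", i) for i in range(1, 101)]
print(len(seed_orders[0]["lines"]))  # somewhere between 50 and 100
```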

For each use case, once KPIs are met for a particular number of users, payload size, etc., run longevity tests for at least 24 hours to ensure that the flow does not have memory leaks or other issues. Check the relevant metrics (e.g. JVM garbage collection, database AWR reports) and purge data after each run to ensure consistency between tests.

Once the above passes, gradually increase the number of users and the payload size on the same use case to identify the system’s limitations under load. When the use case is optimized to its KPIs for concurrent users and payload, add new flows to the mix and tune again.
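
A minimal sketch of that ramp using a generic HTTP driver; the endpoint, payload, user counts, and step duration are placeholders, and a real run would typically use a proper load testing tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "http://soa-test-host:8001/order-flow"  # placeholder URL for the flow under test
ORDER_PAYLOAD = b"<CreateOrder>...</CreateOrder>"  # placeholder payload
STEP_USERS = [5, 10, 20, 40]                       # gradually increasing concurrency
STEP_DURATION_SEC = 15 * 60                        # hold each step long enough to stabilize

def one_request() -> float:
    """Submit one order and return its latency in seconds."""
    start = time.time()
    urlopen(ENDPOINT, data=ORDER_PAYLOAD, timeout=60).read()
    return time.time() - start

def run_step(users: int) -> None:
    """Hold a fixed concurrency level for a while and report the 95th percentile latency."""
    latencies = []
    deadline = time.time() + STEP_DURATION_SEC
    with ThreadPoolExecutor(max_workers=users) as pool:
        while time.time() < deadline:
            latencies.extend(pool.map(lambda _: one_request(), range(users)))
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"{users} users: {len(latencies)} requests, p95 latency {p95:.2f}s")

for users in STEP_USERS:
    run_step(users)  # stop increasing the load once a KPI is missed
```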

While the above may again seem obvious, the temptation to “switch gears” when one use case is not fully working creates real overhead: project teams must switch context, set up data for the new use case, and so on. It is better to complete one use case fully before targeting others.

5. What about Standalone Testing for Integrations?

Standalone testing – stubbing out the enterprise applications – is a useful strategy for identifying integration hotspots and removing the unknowns of enterprise application performance from the integration scenario. Be aware, however, that it will not surface every performance issue. Developing stubs requires a substantial investment to emulate the edge applications, and it may be non-trivial for enterprise applications, which typically have complex setups. Furthermore, some integration settings on the SOA server will typically change once the real applications are introduced, so avoid over-tuning the solution during standalone integration testing.
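
As a sketch of the stubbing idea, here is a minimal stand-in service using only Python’s standard library; the port, canned payload, and simulated latency are hypothetical stand-ins for whatever the real edge application would return:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b'{"status": "BOOKED", "order_id": "12345"}'  # hypothetical payload
SIMULATED_LATENCY_SEC = 0.2  # mimic the edge application's typical response time

class ApplicationStub(BaseHTTPRequestHandler):
    """Stands in for a legacy/edge application so the integration layer can be tested alone."""

    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # drain the request body
        time.sleep(SIMULATED_LATENCY_SEC)  # keep the timing realistic rather than instantaneous
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

if __name__ == "__main__":
    # Point the SOA server's outbound endpoint for the edge application at this stub.
    HTTPServer(("0.0.0.0", 9090), ApplicationStub).serve_forever()
```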

Performance testing and tuning is still somewhat of an art: it requires a good understanding of the technologies involved, their limitations, and the tuning “knobs” each technology offers in order to meet the KPI requirements of the integration flow. At the same time, the non-technical, project-related aspects of the testing exercise are just as essential to the success of the initiative as a whole.

3 Reasons Agile Faces Challenges in the Enterprise

AgileScout invited me to write a guest post on the use of Agile in the enterprise. Having worked on and been involved in multiple projects of varying complexity, I have found that adopting Agile (specifically Scrum) is challenging in many ways for all but the simplest projects. Most challenges can be overcome by modifying the methodology or adopting alternatives such as Kanban or “Scrum-ban”, but this is a practice that usually raises eyebrows in the Scrum community.

There are three areas that are challenging for Agile in the enterprise:

1. Complex inter-dependencies between projects – a reality in any enterprise

2. Handling of Specialized and Global Project Resources such as expert architects in geographically distributed teams

3. Sprint Overhead caused by complex project tasks, such as initial architecture design, which would typically not fit in any reasonable sprint duration.

Read the entire article here.

UPDATE 05/04/2011: Follow-up post on tips for adopting agile in the enterprise. Full article on AgileScout.