Purpose

"Sharing is the best way of Learning" - Unknown

Wednesday, November 27, 2019

Disruptive Technology - Role of the Scrum Master


Context

A disruptive technology is a technological innovation that creates a new market by disrupting or destroying an existing market and its value.

Not all technologies or innovations are disruptive or revolutionary. Some technologies need time or additional improvements before they cause a disruption.

Though the term "disruptive technology" was coined and popularized in 1995, disruption of continuity has always existed. Sometimes it was visible; sometimes it came as a surprise.

Consider the early days of transportation: road transport of goods was mainly by animal-drawn carts, and personal transport was by horse. The invention of the internal combustion engine by Étienne Lenoir in 1859, and its perfection by Nikolaus Otto in 1876, were potentially disruptive technological evolutions. Yet the transport industry was not disrupted immediately.

The subsequent invention of the car by German inventor Karl Benz in 1886 (ten years later) did not disrupt the industry either. It was only in 1908, when Ford revolutionized the production of cars, that the transport industry was completely disrupted. It took fifty years and a series of technologies to disrupt the industry.

Some recent disruptions: postal services disrupted by e-mail; the banking sector by computers; film photography by digital photography; mainframes by minicomputers, then personal computers, and now smartphones and tablets. The pace at which disruption occurs is ever increasing.

It does not take fifty years to disrupt an industry now; it takes two to five years at most. Put in generational terms, the Lost Generation saw one disruption in their lifetime, whereas Gen Y, the Millennials, will see at least 10 to 15 disruptions in theirs.

Generation scale


The software industry is no less prone to disruption. Several emergent technologies, such as machine learning, cloud computing, and SaaS, could disrupt the software product we are managing.

Process changes are also disruptive; for example, the Scrum process versus the waterfall method of software development.

Managing a disruptive technology
Awareness is the key to managing disruption. Most of the time, products disappear because their makers are caught off guard when an existing technology manifests itself as a disruptive one. Understanding the potential of existing technologies and tracking their evolution are things that can be done proactively.

When a potential disruptive technology is identified, we can choose one of three strategies:
  1. Accept
  2. Defend
  3. Counter with similar technology

Accept is the simplest of all strategies in terms of investment. When a potential disruptive technology is identified, ensure that the organization adopts it. This strategy is mostly a low-risk proposition.

Defend is the strategy that works best when the cost of disruption is minimal and the stakes are low. It only delays the disruption; it does not avoid it completely.

Counter is the strategy to adopt when you know there is an alternative technology that is also promising and easy to adopt. Both the investment and the risk are high in this strategy.

Role of scrum master    
It is essential that everyone plays an active role in managing a disruptive technology. The scrum master's role is to ensure that team members are aware of emerging technologies and are trained in them.
He or she should ensure that incoming requirements are also looked at through the prism of emerging technologies.

Incremental Learning
Scrum teams should not be caught off guard when the organization adopts a strategy to manage a potential disruptive technology.
The cost of training ahead of time is much lower than the cost of training or hiring after a strategy is adopted; the cost of lost time and the fear of a late start can jeopardize the organization's strategy.

The scrum master should encourage the team to explore emerging technologies within a sprint, timeboxing some effort for training, exploring emergent technologies, and discussing or analyzing them in the context of current requirements.

The scrum master should allocate 8 hours per scrum member in a four-week sprint. This seems like a huge cost, but in the long run it benefits both the organization and the individual members.

For a one-week sprint, the current structure looks like this:

Current 1 Week Sprint 

The scrum master should encourage the team to spend 2 hours on emerging technologies in a one-week sprint.
In backlog grooming meetings, the scrum master should encourage the team to look at the current requirements through the prism of emergent technologies. The technology need not be adopted; the point is to discuss the possibility of a different approach.

Proposed 1 Week Sprint Overview

Prism of emergent technologies
When the team meets for backlog grooming, after grooming the backlog, reserve the last 10-15 minutes to discuss it through the prism of emergent technologies.

The team should understand that this discussion has no immediate impact on the groomed backlog; it is meant to encourage evaluation and validation of current requirements against future emergent technologies.

The team should try to answer questions such as:
  1. Is there a way to achieve similar or higher value using another approach or technology?
  2. How can the current value be improved?
  3. Does the current requirement remain valid if an emergent technology disrupts?





Sunday, October 6, 2019

My adventures with improving test automation


I want to share some thoughts and experiences from something I have worked on for over a year now: improving test automation in our product. It has been a great learning experience and a very challenging one for me.
Before I move on to my next adventure, I would like to pause, look back, retrospect, and document my experiences and learnings.
Background
     Two years ago (in 2017), as part of an organizational goal, we had to improve our automation. Our product is a huge enterprise product with approximately 2.5 million lines of Java code (not counting the JavaScript, AngularJS, JSP, HTML, and XML lines). The product had code coverage on the higher side of 30%, with approximately 90K tests: a good mix of unit tests, integration tests, and Selenium tests.

Though we started writing tests for new code wherever possible, the majority of the critical old code was not covered by any automation.

Goal
   - Our goal was to increase code coverage in critical areas by 8% in FY19.

Challenges:
While driving this effort, we faced several challenges.
    - Managers and team members were not convinced that unit tests for legacy code were possible or of any value.
    - Some generic questions and comments from my peer managers were:
  • What is the ROI of this effort?
  • Are we investing in the right area?
  • How will it add value to customers?
  • Is code coverage the right metric for judging the effectiveness of automation?
  • With the effort spent on automation, I could have fixed more customer-reported issues.
  • I have already spent effort improving code coverage in my area, but there is still a large number of customer-reported issues.
  • My area has a lot of automation tests, and their maintenance cost is huge; we doubt that adding more tests is an effective strategy.
  • For more effective test automation, it should be end-to-end or UI-based testing involving customer scenarios, and it should be the QA engineers' problem.

    - Many developers are problem solvers but not truly extreme programmers equipped with test-driven development; automation tests and testable code are always an afterthought.

    - Some code areas are no-go areas: no one dares to touch them, and hence automation has always been on the back burner.


The challenges can be broadly categorized into two:
   1. A mindset problem
   2. A gap in skill set

Execution
Overcoming the mindset
   To be frank, though I tried to reason with many managers about the importance of a focused effort to increase automation and code coverage, it was very difficult to convince them. The conclusion of all our private talks was: our product is different, it is legacy code, and the effort is not worth spending.

Here are some of my views about this effort.

It is not worth spending the effort on automation for a legacy product; it can be effective for a new product
I don't agree with this view. If the product is almost dead or living on life support (on maintenance revenue, with customers not upgrading to new products or versions), then we may need to evaluate whether we want to spend on such an effort.

For a product that is growing, such an effort can give it a new lease of life. The effort can reduce the cost of testing many fold. With greater automation, developers get faster feedback on newly added features, which in turn shortens development cycles. Unit tests help developers refactor and clean the code.

Is code coverage the right metric for effective automation?
   Yes, code coverage can be deceiving for a product like ours. As an example, say I have 100 lines of code in my module, and the module's automation tests cover 90 of those lines.
If the customer mostly uses features that are part of the uncovered 10 lines, my 90% code coverage is meaningless. (In other words, those 90 lines should be re-examined to see whether they are really required; it could be that they are not needed in the first place.)
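The 90/10 situation can be sketched in code. Below is a minimal, hypothetical Java example (all names invented): the "test" in main executes most lines of a legacy method, so line coverage looks high, yet the one branch real customers rely on is never exercised.

```java
public class CoverageExample {

    // Hypothetical legacy method: validation and bookkeeping lines,
    // plus one critical branch that real customers depend on.
    static double applyDiscount(double price, boolean premiumCustomer) {
        if (price < 0) {
            throw new IllegalArgumentException("negative price");
        }
        double discounted = price;
        if (premiumCustomer) {
            discounted = price * 0.80; // the branch customers actually use
        }
        return discounted;
    }

    public static void main(String[] args) {
        // This "suite" covers validation plus the default path, so line
        // coverage looks high, but the premium branch stays untested.
        double result = applyDiscount(100.0, false);
        System.out.println("default path result = " + result);
    }
}
```

High coverage here would say nothing about the quality of protection on the branch that matters.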

In short, mindless code coverage improvement is worthless; you will not achieve value.
 Before we start this kind of effort, we need to analyse which areas within the product are critical and need coverage.
   Some strategies could be to target areas where
  • There are more customer-reported issues
  • The code changes frequently
  • There are critical "no go" areas
  • The code is difficult to understand

    Code coverage is one output variable, a metric that indicates whether we are moving in the right direction and at the right pace.

The other outputs of effective automation are fewer customer-reported issues, more refactorable code, and increased developer productivity. These variables can only be measured in the long run, not in the short term. As a short-term measure to guide us through this effort, code coverage seems to be the right metric.

It is an involved effort. What is the ROI of this effort? How do we measure the effectiveness of this effort at the end of the year?
    It is definitely an involved effort, but once it gets rolling it becomes easier.
    The ROI of the automation effort is reduced feature regressions and less reopening and rework in the long run; in the short run it is improved code coverage.
To see the benefits, we need to reach a certain point of code coverage, a tipping point. In my opinion it is at around 60% of your code.
There are several input variables to achieve this:
  • Focused and strategic automation efforts; there is no point automating areas where the footfall is low or rare.
  • Writing unit tests that enable refactoring of the code.
  • Covering customer-filed or internal QA-filed issues with automation (these are missed use cases).
  • Removing dead code; dead code can be easily identified after writing some unit tests.

It is legacy code. What is the point of automating to improve code coverage?
  All the more reason it needs automation. Tests also act as documentation, which helps with future refactoring.
Modules in functional areas that are currently stable and critical but have no automation tests are good candidates to automate first.

Do we lose out on new product development capacity?
Initially, teams that have low code coverage and have spent little time on automation will feel the pinch.
Once the teams mature and build a habit of coding with TDD, new product development capacity will increase, and build breaks and rework will reduce.

Could such a focused effort not go down well with developers?
The increased automation effort should not be projected as dirty work; it is the manager's duty to ensure it is framed in the right way. A few of my thoughts here:
  • If you recall the progression of developers over the years, they began with procedural programming, moved to OOP, and then to extreme programming. With this effort we are asking our developers to become extreme programmers: simple, effective, refactorable, testable coding.
A way to start is to change the mindset of developers from fixing problems to writing testable code.

Does TDD work only for new code and new products?
Not necessarily. The first step in TDD is to write tests and keep refactoring; this works for old legacy code too.
  • It is the manager's responsibility to make this effort interesting. One way is for the whole team to spend one day automating, perhaps the first day of the sprint.
    
What is the point of writing tests that have no meaning and only improve code coverage?
   Agreed. This is where teams should have a strategy: go through customer-filed issues and other available data to automate the critical modules and critical use cases, securing those areas from future regressions.

After all this explanation and reasoning, some managers still have problems with this effort.
Yes, the benefits seem small and invisible in the short term, but implemented correctly, the long-run benefits will be huge.

Gap in skill set
Writing unit tests for legacy code is a big challenge. It is often difficult to write unit tests for legacy code without refactoring, and refactoring cannot be done with confidence without tests backing it: a vicious circle that needs to be broken by writing tests.
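One way to break the circle is a characterization test: before refactoring, pin down whatever the legacy code currently does, even if the behavior looks odd, and refactor with that safety net in place. A minimal sketch with invented names:

```java
public class CharacterizationExample {

    // Legacy code we dare not change yet. Note the quirky behavior:
    // it truncates the amount instead of rounding.
    static String formatAmount(double amount) {
        return "INR " + (long) amount;
    }

    public static void main(String[] args) {
        // Characterize the current behavior first, warts and all;
        // refactor afterwards with these checks as a safety net.
        check(formatAmount(10.99), "INR 10");
        check(formatAmount(0.0), "INR 0");
        System.out.println("characterization tests pass");
    }

    static void check(String actual, String expected) {
        if (!actual.equals(expected)) {
            throw new AssertionError(expected + " != " + actual);
        }
    }
}
```

The point is not that "INR 10" is correct; it is that the behavior is now pinned down, so any refactoring that changes it will be caught.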

Developers need to be skilled in writing unit tests with mocks. There is a learning curve for some developers, and managers need to understand this. Code reviewers have to be cautious in this phase, as unit tests can sometimes be too shallow.
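As a sketch of what testing with mocks looks like, here is a hand-rolled fake standing in for a mocking library such as Mockito, so the example stays self-contained; all names are hypothetical. The legacy logic reaches the external system only through an interface, and the test substitutes a canned implementation:

```java
public class MockExample {

    // The seam: the legacy class talks to an external system
    // (a rate service) only through this interface.
    interface RateService {
        double taxRate(String region);
    }

    static double priceWithTax(double price, String region, RateService rates) {
        return price * (1.0 + rates.taxRate(region));
    }

    public static void main(String[] args) {
        // Mock: a canned implementation, no network or database needed.
        RateService fake = region -> 0.10;
        double result = priceWithTax(100.0, "IN", fake);
        if (Math.abs(result - 110.0) > 1e-9) {
            throw new AssertionError("expected 110.0, got " + result);
        }
        System.out.println("mocked test passes: " + result);
    }
}
```

Introducing such a seam is often the first refactoring needed to make legacy code unit-testable at all.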

Pair programming with a senior developer helps developers learn to write tests quickly.
Coding forums help propagate best practices for writing tests across the teams.

Our Plan for increasing automation

An increase in code coverage and more effective automation can be achieved by the following:
  1. Adding more tests
  2. Refactoring tests and code
  3. Removing dead code

Adding more tests
Adding more tests should not be arbitrary; you need to identify which tests are apt.

Automation Strategy: 
The test automation pyramid is the ideal pattern of test automation for teams to reach; it should be the development team's ultimate goal.


Why automation pyramid?
If we look at the cost of test automation, it comprises the cost of creation, the cost of execution, and the cost of maintenance.

The cost of automation depends on the type of testing and on the availability of test frameworks and tools. Good tools are available for automation, such as JUnit, JsUnit, Jasmine, and Selenium. Assuming that every team has good tools and frameworks, the cost of test automation mainly varies with the type of tests.
 
  


Every time the code changes, the automation tests need to be run.

As the code keeps changing with more feature additions, there is a maintenance cost associated with automation tests. The test automation pyramid achieves more code coverage at an optimal maintenance cost.

To get the team to the best test automation pattern, you need to understand where the team currently is.
Identify the teams with no automation, and those heavy on unit tests, integration tests, or UI tests.
The goal for each module is to reach the right automation pattern by writing unit tests, integration tests, and Selenium tests.
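Identifying where a team currently sits can start with something as simple as comparing its test counts. A minimal sketch, with hypothetical thresholds, that classifies a team's automation shape from its unit, integration, and UI test counts:

```java
public class PyramidCheck {

    // Crude classification of a team's test distribution relative to
    // the test automation pyramid (thresholds are illustrative only).
    static String classify(int unit, int integration, int ui) {
        int total = unit + integration + ui;
        if (total == 0) return "no automation";
        if (ui > unit) return "inverted: heavy on UI tests";
        if (integration > unit) return "heavy on integration tests";
        return "pyramid-shaped: unit tests form the base";
    }

    public static void main(String[] args) {
        // Three hypothetical teams.
        System.out.println(classify(0, 0, 0));
        System.out.println(classify(50, 300, 900));
        System.out.println(classify(7000, 1500, 400));
    }
}
```

A real assessment would also weigh execution time and maintenance cost per layer, but even this shape check tells a team which kind of test to invest in next.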

Refactoring code and removing dead code
As developers we sometimes believe that adding more code and more features is what matters, and forget that clean-up is as important as adding new features. This effort gave us the opportunity to clean up and refactor some code.


Conclusion
Though we have completed this exercise and met our yearly goal of improving code coverage, some of the naysayers are still not convinced that the effort was wisely spent. Short-term code coverage shows an improvement, but we have no reported metrics of increased developer and QA productivity, nor metrics showing a reduction in customer-reported issues.
The immediate change I can see is a cultural one: developers now believe that code written or changed has to be testable.


Some nice reads that helped me in the process

https://martinfowler.com/articles/practical-test-pyramid.html
https://abstracta.us/blog/test-automation/best-testing-practices-agile-teams-automation-pyramid/
https://www.guru99.com/code-coverage.html