For our 4th coding dojo, we spent our time demonstrating various implementations of the Observer pattern as applied to the Tennis Kata problem. Susheel implemented a version where the players were subjects and the scoreboard was the observer. We had a spirited discussion around whether the scoreboard’s keeping a reference to the players was a violation of the Observer pattern, and decided it was not. Chris implemented two versions where the players were held in a group class, which was the subject, and the scoreboard was the observer. One implementation used the Event/Delegate constructs provided by .NET and the other used the IObserver/IObservable constructs used by the Reactive Extensions (Rx) in .NET.
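For readers who haven’t seen these shapes before, here is a minimal sketch of the second arrangement: a group class as the subject, pushing notifications to a scoreboard observer. The dojo code was C#; this is an equivalent Java sketch with hypothetical names (PlayerGroup, Scoreboard), mirroring the push-based IObserver/IObservable style rather than .NET events.

```java
import java.util.ArrayList;
import java.util.List;

// Observer: receives score notifications pushed by the subject.
interface ScoreObserver {
    void onPointScored(String playerName, int newPointTotal);
}

// Subject: the group class holds both players and notifies its
// observers whenever a point is scored.
class PlayerGroup {
    private final String[] names;
    private final int[] points = new int[2];
    private final List<ScoreObserver> observers = new ArrayList<>();

    PlayerGroup(String playerOne, String playerTwo) {
        names = new String[] { playerOne, playerTwo };
    }

    void subscribe(ScoreObserver observer) {
        observers.add(observer);
    }

    void pointScoredBy(int playerIndex) {
        points[playerIndex]++;
        for (ScoreObserver o : observers) {
            o.onPointScored(names[playerIndex], points[playerIndex]);
        }
    }
}

// Observer implementation: a scoreboard that records the latest update.
class Scoreboard implements ScoreObserver {
    String lastUpdate = "";

    @Override
    public void onPointScored(String playerName, int newPointTotal) {
        lastUpdate = playerName + ": " + newPointTotal;
    }
}

class ObserverSketch {
    public static void main(String[] args) {
        PlayerGroup group = new PlayerGroup("Susheel", "Chris");
        Scoreboard board = new Scoreboard();
        group.subscribe(board);
        group.pointScoredBy(0);
        group.pointScoredBy(0);
        System.out.println(board.lastUpdate); // prints "Susheel: 2"
    }
}
```

In this sketch the scoreboard only reacts to what the subject pushes, though, as we discussed, an observer keeping a reference back to the players is not necessarily a violation of the pattern.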
Now that my eyes have been opened to these Reactive Extensions, I plan to look a little deeper into the topic. For our next dojo, we will look at the Gilded Rose Kata and at creating effective test cases.
My thanks go out to Susheel for his contributions to the dojo today.
Anyhow, our Tennis Kata implementation turned out to be very challenging. We decided to implement the observer pattern with the players as the subjects and the scoreboard as the observer. We ended up thrashing on design for the majority of the dojo, which is okay as we took away these bits of wisdom:
1) We should spend more time understanding the patterns we plan to use prior to trying to use them.
2) We should follow a work pattern where we define our interfaces and object relations in concept, then write the interfaces in code and express the object relations through unit tests, allowing the unit tests to bring design flaws to the surface. This approach should serve us as well in our professional work as it does in the dojo setting.
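The second point can be sketched in code. Again the dojo worked in C#; this Java sketch uses hypothetical names (Game, ScoreListener) to show the rhythm: interface first, then a unit test that pins down the intended object relations, then the simplest implementation that satisfies the test.

```java
import java.util.ArrayList;
import java.util.List;

// Step 1: define the interfaces and object relations in concept, then in code.
interface ScoreListener {
    void scoreChanged(String playerName);
}

interface Game {
    void pointWonBy(String playerName);
    void addListener(ScoreListener listener);
}

// Step 2: write a unit test against the interface before the real
// implementation exists; the test documents the intended relations
// and surfaces design flaws early.
class GameContractTest {
    static void listenerIsNotifiedWhenAPointIsWon(Game game) {
        List<String> received = new ArrayList<>();
        game.addListener(received::add);
        game.pointWonBy("Altaf");
        if (received.size() != 1 || !received.get(0).equals("Altaf")) {
            throw new AssertionError("listener should be notified once with the scorer's name");
        }
    }
}

// Step 3: only now write the simplest implementation that passes the test.
class SimpleGame implements Game {
    private final List<ScoreListener> listeners = new ArrayList<>();

    public void addListener(ScoreListener listener) {
        listeners.add(listener);
    }

    public void pointWonBy(String playerName) {
        for (ScoreListener l : listeners) {
            l.scoreChanged(playerName);
        }
    }
}

class WorkPatternSketch {
    public static void main(String[] args) {
        GameContractTest.listenerIsNotifiedWhenAPointIsWon(new SimpleGame());
        System.out.println("contract test passed"); // prints "contract test passed"
    }
}
```

Had we written the contract test first in the dojo, the design thrash would have shown up in a failing test instead of in a long whiteboard debate.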
Between now and the next dojo, I will work on the use case where we treat Players in aggregate as a subject for a scoreboard observer (I will also look to leverage the .NET native implementation of the Observer pattern). Glynn will work on the use case where the Game is the subject for the Scoreboard observer. Susheel will work on the use case where the Players individually are subjects for the Scoreboard observer.
My thanks go out to Altaf, Susheel, and Glynn for their contributions to making this dojo a success.
Our main takeaway was that the refactoring worked because it produced code that was self-documenting and much easier to ramp up on than what we started with. We also found that it was valuable to continue the refactoring threads of the person ahead of you when it was your turn at the keyboard, that refactoring in small increments set us up for better success than large-scale refactoring, and that it is best to run the tests constantly.
As a side note, our reward for meeting so early was a beautiful sunrise behind Houston’s downtown skyline.
My thanks go out to Altaf, Raj, and Susheel for their contributions to making this dojo a success.
We held our first coding dojo this week and worked through the C# version of the Tennis Refactoring Kata. We followed the Randori method and the agenda scripted out by Emily Bache in her book, The Coding Dojo Handbook, and it worked really well. I can definitely recommend her book and Pluralsight course on this topic.
Our main takeaways were the importance of unit testing to conducting refactoring and the power and value of renaming methods and variables with little or no risk; we also picked up some nifty keyboard shortcuts from each other.
The session went so well that we are going to continue meeting weekly, working through katas that improve our refactoring skills, until we feel comfortable refactoring legacy production code at will.
Many thanks go out to Glynn, Altaf, and Raj for making this first dojo gathering a success.
Code Review: Should the results of a code review count against you?
“Code reviews often start off on the wrong foot because they are perceived as an unnecessary step that has been forced upon the developers or, in some cases, evidence that management doesn’t trust the developers. Neither of these perspectives is accurate. Code reviews are a proven, effective way to minimize defects. Whatever additional motivations the organization has for performing code reviews, they are, at their core, an industry best practice.”
– Robert Bogue
Catching defects this early in the software development process saves the company money and results in a much higher quality product. A peer review should never be associated with lack of knowledge or confidence, but with openness and a willingness to learn best practices as a group.
Code Review: How do I prep for a peer code review?
You should create a self-checklist with common mistakes that you have made in the past and use this to help remind you of things to look for in your own code. Try to keep your checklist below 20 items. Additionally, you should run whatever code analysis tools you have at your disposal.
“… we believe that requiring preparation will cause anyone to be more careful, rethink their logic, and write better code overall.”
– Jason Cohen
Your self-checklist is your own private checklist. This is not the same thing as keeping a general “top 10” list of things to look for across the organization.
Regarding code analysis tools, you should run “Analyze Solution for Code Clones”, run “Analyze Code Coverage” on your unit tests, clear all code analysis issues detected by ReSharper (or, if ReSharper is not installed, run Code Analysis on your code in Visual Studio), and make sure all build warnings have been cleared.
Code Review: What do you review in a peer code review?
All changes made should be reviewed in a code review; however, it is normal to focus on key areas, and it is okay for the author to give hints about what he wants the reviewer to look at specifically.
“We look at everything, but generally focus on the more interesting areas. What’s interesting? Whatever jumps out at us when we’re looking at the code, or anything the author specifically wants to focus on. Our goal isn’t necessarily to put a microscope to every single line of code.”
– Jeff Atwood
Code Review: How long is a peer code review?
The code review process has been researched and studied since 1976, and it has been established and re-established that the maximum effective code inspection rate is around 200 lines of code per hour.
“This is direct and conclusive evidence that reviews should be limited to around one hour, not to exceed two hours.”
– Jason Cohen
I’ve observed that the average developer produces around 100 lines of code per day in an agile environment, so I expect the average initial review in a peer review to last about 30 minutes. Since there is usually a cycle of inspection, clarification, and rework, the end-to-end duration of a peer review will vary with the amount of back-and-forth clarification and the amount of rework that needs to be done. Teams should plan accordingly.
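The 30-minute figure is just the arithmetic of the two numbers above (the 200 LOC/hour inspection ceiling and my rough 100 LOC/day output estimate):

```java
class ReviewEstimate {
    public static void main(String[] args) {
        double inspectionRateLocPerHour = 200.0; // maximum effective inspection rate cited above
        double dailyOutputLoc = 100.0;           // rough daily output per developer (my estimate)
        double reviewMinutes = dailyOutputLoc / inspectionRateLocPerHour * 60.0;
        System.out.println(reviewMinutes + " minutes"); // prints "30.0 minutes"
    }
}
```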
Code Review: When do you do a peer code review?
A peer code review can be done prior to check-in or after check-in.
“I find most people come down strongly on one side or another; and since we make a code review tool, this is a point of contention for a lot of our customers.”
– Jason Cohen
I advocate doing pre-commit reviews.
By doing code reviews prior to check-in, you avoid the scenario where a developer repeats or compounds an issue that could have been caught during review. Although a pre-commit review can create blocking and delay issues, this can also be seen as beneficial: it provides an incentive to get reviews done quickly and to avoid eternally open code reviews.
Post-commit reviews can make bugs slower and harder to fix, because the reviews tend to come later, when the change is no longer fresh in mind. Post-commit reviews can also lead to batches of code that are too big to review effectively.
Code Review: Does the reviewee have to be present at a peer code review?
A peer code review can be done shoulder to shoulder or passed around via e-mail/TFS. Shoulder-to-shoulder reviews require the reviewee to be present and have the benefit of greater information sharing and learning; the drawback is that they can result in scheduling challenges and disruptions.
Pass-around code reviews don’t require the reviewee to be present, and thus avoid the scheduling challenges that shoulder-to-shoulder reviews have. Pass-around reviews also provide a record of the review without additional overhead.
“Tool assisted reviews strike a balance between time invested and ease of implementation.”
– Jason Cohen
I would recommend performing 80% of code reviews via a tool-assisted, pass-around style of process like the one provided by TFS 2012. The remaining 20% of code reviews should still be tool assisted, but performed side by side. This mix should maintain the balance between optimal use of time and maximizing learning opportunities.