6 Considerations for Mobile Device Testing
Presented by: Jacob Stevens, Senior Test Lead at Quardev, Inc. and Matt Pina, Manager of IT Applications Quality Assurance at Alaska Airlines
New to mobile? Unsure how the unique dynamics and constraints of this lucrative platform might shape your test design? This primer on testing mobile devices introduces the aspects of any mobile test scenario that may warrant attention in your testing. We’ll look at the 6 key considerations to help you ensure thoughtful design and proper coverage, from carrier issues to the myriad hardware configurations.
About our speakers:
Jacob Stevens is a Senior Test Lead at Quardev, Inc. and a ten-year QA veteran with over 40 industry-leading clients. The scope and scale of the projects, and the types of platforms, technologies, and methodologies he’s worked with, have varied widely. Jacob studied under Jon Bach to adopt a context-driven approach to test design. One of Jacob’s favorite topics in QA is epistemology. How do we know that our test results are accurate? How do we ensure inherent biases in our test execution methodologies do not manifest in false positives or false negatives? Jacob enjoys talking technology and many other subjects on Twitter @jacobstevens. Jacob is also a little uncomfortable writing about himself in the third person.
Matt Pina is the Manager of IT Applications Quality Assurance at Alaska Airlines, Inc. He is an IT professional with over 25 years in the industry. His passion is to create quality models that adapt to multiple platforms. He currently leads the Alaska QA Center of Excellence with project work supporting all platforms, but primarily focusing on mobile and web applications. Matt is a graduate of the Central Washington University School of Business, holding a B.S. in Business Administration/Finance. Additionally, he holds several computing certifications from various local universities.
Testing in Production: Your Key to Engaging Customers
Presented by: Seth Eliot, Senior Test Manager at Microsoft, Bing
Feature lists do not drive customer attachment; meeting key needs does. Learn how Testing in Production (TiP) works synergistically with customer scenarios to put real product in front of real users in their own environment to get direct, actionable feedback from users and uncover features that meet their key needs. TiP also allows us to quantify the impact of these features, which is crucial since evidence shows that more than half of the ideas we think will improve the user experience actually fail to do so, and some actually make it worse.
Learn about tools and techniques like Online Experimentation, Data Mining, and Exposure Control that you can use to Test in Production (TiP) to get direct, actionable feedback from real users and learn what works and what doesn’t. Of course, production can be a dangerous place to test, so these techniques must also limit any potential negative impact on users. Hear several examples from within Microsoft, as well as Amazon.com and others, that show how Testing in Production with real users will enable you to realize better software quality.
About our speaker: Seth Eliot is Senior Test Manager at Microsoft where his team solves Exabyte storage and data processing challenges for Bing. Previously he was Test Manager for the Microsoft Experimentation Platform (http://exp-platform.com) which enables developers to innovate by testing new ideas quickly with real users “in production”. Testing in Production (TiP), software processes, cloud computing, and other topics are ruminated upon at Seth’s blog at http://bit.ly/seth_qa. Prior to Microsoft, Seth applied his experience at delivering high quality software services at Amazon.com where he led the Digital QA team to release Amazon MP3 download, Amazon Video on Demand Streaming, and support systems for Kindle.
Presentation Slide Deck: Testing in Production
To Test or Not To Test
Presented by: Adam Yuret, Sr. Test Engineer, Volunteermatch.com
Ostensibly the goal of testing is to provide test coverage of a software product to uncover and document bugs. What if a stakeholder doesn’t want you to report bugs? What if they want you to test less? How would you implement a new testing process for a team transitioning to a new methodology without over-reaching your mandate as a tester? Now, add to that the additional challenge of never having met the team in person.
Let’s discuss scenarios where the tester is explicitly asked to ignore most bugs, not because the product is so polished that the only probable defects are minor, but because the opposite is true. There are so many problems that documenting them all and acting on them would have a crippling effect on the project. What would you do in this scenario? Come join Adam Yuret and hear how he’s handled this type of dilemma in current and past projects. Likewise, share your contexts and ways in which you may have faced these challenges.
About our speaker: After 8 years at WebTrends testing an enterprise-level SaaS data warehousing product, which included building and maintaining a large-scale testing environment, Adam currently works as an “army of one” tester for VolunteerMatch. VolunteerMatch is a national nonprofit organization dedicated to strengthening communities by making it easier for good people and good causes to connect. Adam is a relative newcomer to the context-driven community and is currently working to build a testing process for a project that is transitioning to an agile/scrum methodology.
Presentation Slide Deck: To Test or Not To Test
Teaching the New Software Testers – an Experiment
The many facets of what it meant to introduce a software testing course into UW Bothell – professor and students give an experience report and look to the future
Presented by David Socha, Assistant Professor, University of Washington, Bothell
As a new professor at the University of Washington Bothell I decided to put together a new course on Software Testing – a course that had never been taught here, and that I had never taught. Nor had I ever explicitly acted in the role of software tester during my 19 years working in the software industry.
Why do such a crazy thing? How did the course turn out? What were the good and the bad, the memorable and the inspiring from this course? How did it change the students, and the instructor? What did it “mean” for this course to be taught at the University of Washington Bothell? What might this “mean” for the software testing community in this area?
If you are interested in any of these questions, or might want to be involved in the co-design and execution of such a course next year, come hear David Socha and some of his students give an experience report on this autumn quarter 2010 course at the University of Washington Bothell.
About our speaker: As of September 2010, David Socha is an Assistant Professor at the University of Washington Bothell in the Computing and Software Systems program. He grew up on a farm in Wisconsin, and in the process became eco-literate (e.g. a systems thinker). This deepened as an undergraduate at the University of Wisconsin, Madison, where he made his own luck and spent a year in the Galapagos Islands as a field research assistant studying the Galapagos land iguanas, and then earned a BS in Zoology. After earning a PhD in Computer Science and Engineering at the University of Washington, Seattle, David spent 19 years working in a variety of software development organizations, moving from being an individual contributor, to managing groups of software developers, to being an agile coach. Recognizing the Teacher in him, he now is a tenure-track professor at UW Bothell working to make a difference. His areas of interest focus on the human side of software development, innovation, design, biomimicry… and perhaps even software testing.
Presentation Slide Deck: Teaching the New Software Testers
My Crazy Plan to Respond to Change
Presented by Jon Bach, Manager for Corporate Intellect, Quardev, Inc.
During a given workday, we either focus on testing activities or we have to focus on the things that interrupt testing activities. Some of us lament the interruptions, others find it energizing because it’s consistent with notions of Agile – like responding to change over following a plan. I’ve got an idea on how to use interruptions to focus on testing activities. Multi-tasking is not evil, nor does it have to torpedo your project, but it may require a management technique to help you stay sane. Enter Thread-Based Test Management — a way to let interruptions guide you to where you’re needed most. Using a kanban-style dashboard is one way to go with the flow without losing your way. Using a spreadsheet with rows to track topics you followed in a day is another. In this talk, Jon Bach describes his experiences with a new technique he and his brother James invented that might help you get rid of the shame of feeling undisciplined for letting yourself be interrupted while being more response-able to the important things that need your time and attention.
About our speaker: Jon has over 15 years of experience, including Fortune 500 and start-up companies, serving in the Quality Assistance and Software Testing domain. He is a co-author of a Microsoft Patterns and Practices book on Acceptance Testing (2010) and is an award-winning keynote speaker for major testing conferences (STAR, QAI, BCS SIGIST). He has served as vice president of conferences for the Association for Software Testing; invented a method for managing and measuring exploratory testing, a method for coaching testers (Open-Book Testing), and a method for categorizing risk (Color-Aided Test Design); and has published over 50 articles, whitepapers, and presentations on the value of exploration and rapid, risk-based testing. He has been with Quardev for the past 6 years and serves as the speaker chairman for the acclaimed Seattle non-profit QASIG testing forum.
Beyond Six Sigma: Agile/Lean/Scrum/XP/Kanban for physical manufacturing – building a 100 MPG road car in 3 months
Presented by Joe Justice, a Seattle area lean software consultant and entrepreneur.
Our September speaker, Joe Justice, presented an experience report about taking on the Progressive Insurance Automotive X Prize using Lean software techniques. His report covered how he and his team did it, pictures and information about the car, the methods, methodology, and secrets, questions and next steps, and the future of this approach.
About our speaker: Joe Justice, a Seattle area lean software consultant and entrepreneur, ported lean software practices back to manufacturing in order to compete in the Progressive Insurance Automotive X Prize, a $10,000,000.00 race for 100 MPG cars built to road-legal requirements. Joe will walk through the take-aways, wins, and lessons learned when building an ultra-fuel-efficient hyper car using TDD, Scrum, and pair development.
Presented by 8-10 of your fellow QASIG participants
The format was the “lightning talk” where each of the speakers knew that they only had five minutes to present their idea (one at a time). Each speaker had a severely limited time-frame, but could speak about anything related to testing in that time – a new idea for status reporting or measurement, a lesson they learned on a project, an experience report about a new technique they tried, or a general test principle they want to share.
Lightning Talk Speakers, Topics, and Slides (where submitted):
Jim Hancock, Director of Quality, IT and Customer Delivery at Geospiza
Creating a Quality Index
Most software development has become “agile,” but CMM, CobiT, PMBOK, Malcolm Baldrige and other quality process metric deployments are still large and complicated. Jim will discuss how to deploy software quality metrics that meet the following criteria: objective, non-obtrusive, flexible, repeatable, consistent, holistic, efficient, and inexpensive. Most importantly, the Quality Index will provide an accurate assessment of risk and quality within any SDLC, upon which logical deployment decisions can be made.

Jeremy Lightsmith, Seasoned Agile Coach, Trainer and Facilitator
What’s Your Conflict Management Style?
There is a model of the way people respond to conflict that looks like the attached picture. I thought it would be cool to introduce it and talk about why it’s helpful and how to use it. There’s also a related questionnaire that I can leave for everyone to see how they score.

Jeff Brewster, Grameen Foundation
Untethering a Test Lab
Do you have a performance test lab that can only be used onsite or that requires expensive software? Been forced to share lab resources due to cost or space concerns? We’ll share the steps the Mifos testing team used to move their performance test lab to the cloud.

Tim Tapping, SDET, Expedia
How to Sprint as Agile/Lean QA on the Feature Team
QA faces challenges if we are used to building out test plans from extensive documentation, which is not present in agile/lean development environments. We need to engage early in the iteration and help the team understand that ‘quality deliverables’ are a team activity. Drawing on my recent experience at an acquired startup that was an agile shop, I can share techniques that worked for us and pitfalls to avoid.

Jon Bach, Manager for Corporate Intellect, Quardev, Inc.
I’m working on a talk for Ignite (Seattle) that has a likelihood of being chosen. How much of a likelihood depends on how well I can define this talk about our industry to the technical masses. I could use some feedback.

Lanette Creamer, Test Lead, Adobe Systems
Agile Testing Ninjas
If Adam Goucher learned everything he knows about testing from Pirates, why does Lanette insist that Ninjas hold the key? What is it about stealth, speed, discipline, and skill that translates well to Agile testing?

Jim Benson, CEO, Modus Cooperandi
Kaizen or Death
If we don’t improve, we stagnate. If we stagnate, we perish. Life requires change, renewal, and innovation. Evolution is innovation. In life, in business, in everything, we must continuously improve. In 5 short minutes, Jim Benson will change your life through the Japanese concept of Kaizen (continuous improvement).

Gary Knopp, SDET, Attachmate
Chow: Expand Your Idea of Dogfood
This talk will be a quick case study of how we were able to break out of what seemed like an obvious test strategy (using or extending our existing test system) and move to a lighter-weight, component-based test approach: testing the component in isolation, the usage of the component in isolation, and then the system as a whole. Ultimately we were successful in using this strategy and even completed the testing two weeks early.

Michael Wolf, Perl Trainer, evangelist, programmer, community member, town crier…
TAP – Test Anything Protocol
Presented by Jim Benson, CEO of Modus Cooperandi
What is QA?
When we seek Quality, we are looking for something more than “Does it break if I do this?” We are looking for improvement, for ascendancy, for something better.
Scripts, exploration, observation.
We look for patterns, we look for exceptions, we look for jarring transitions.
In Personal Kanban and Lean, we look for patterns, exceptions and transitions in the creation of value and the flow of work. Think of it as QA for your workload and your processes. QA for your QA.
On the 12th of May, we discussed how Lean techniques like Personal and Organizational Kanban can be applied to the highly variable workload of a tester, and how understanding the variation in work, and making judgments based on that understanding, helps testers, managers, and clients schedule testing and gauge the impact of unreasonable demands on QA.
We will, at last, be able to define “unreasonable” and provide meaningful service level agreements.
About our speaker: Jim Benson is a collaborative management consultant. He is CEO of Modus Cooperandi, a consultancy which combines Lean, Agile Management and Social Media principles to develop sustainable teams. Clients include the United Nations, World Bank, Microsoft, and NBC Universal. Jim created the Personal Kanban technique that applies lean thinking to individual and small team work. He blogs at evolving web and is @ourfounder on twitter.
Reducing Test Case Bloat
Presented by Lanette Creamer
We may think we are ready to move on to new and innovative features. However, if we do not deal with the past, it can easily come back to haunt us, slowing down new projects and robbing us of testing time unexpectedly, often to the point that testing becomes the bottleneck that slows innovation to a crawl.
For those of us who work on software that already exists, exciting new functionality and improvements are the main things that drive upgrades, as well as compatibility with new platforms. However, if end users cannot trust the quality of the legacy features they rely on, they will be reluctant to upgrade, or worse, your new versions will get a reputation of being unstable – harming overall adoption. In some cases users may downgrade their software to an earlier version because they are unhappy with the quality of the newer release or request an earlier version they feel is reliable.
This paper presentation is about the subjective and difficult part of testing, which has no provably correct mathematical answer. It is about risk management, test planning, cost, value, and being thoughtful about which tests to run in the context of your specific project. The discussion covers identifying and reducing test case bloat, when it can be done, and who does it, along with a few examples used in practice. Further, it will cover one as-yet-unproven theory currently under trial, and experiences from significantly reducing test cases while covering more than three times as many applications after the test team shrank from sixteen to four testers. When facing increasingly complex and growing software, we must balance testing the existing features that customers rely on every day with new features and interactions. When balanced in a sensible way, the best of the legacy test cases can be maintained, using existing knowledge to reduce risk as much as possible.
About our speaker: Lanette Creamer is a test lead with 10 years of industry experience ranging from product feature testing on early versions of InDesign to leading collaborative end-to-end workflow testing across the Adobe Creative Suites. Most recently Lanette has been testing on an agile team that is automating the software production process between a product build and actual shipment, to be used on all Adobe shipping products in 2010. Lanette has two published technical papers: “Testing for the User Experience,” voted best paper by presentation attendees at PNSQC 2008, and “Reducing Test Case Bloat,” PNSQC 2009. Her magazine article “9 Ways to Recruit Testers for Collaborative Test Events” was published in Software Test & Performance Magazine, in the January 2010 issue. Lanette started writing a testing blog at http://www.testyredhead.com in 2006 and has been writing and collaborating with the online testing community non-stop since.
Presented by Detective Brian Stampfl
Detective Stampfl from the Seattle CSI unit will be joining us to discuss the business of Crime Scene Investigation – a discipline not unlike bug investigation, with processes often used in software testing. We’ll talk about parallels and get some insight into what Crime Scene Investigation is really like, straight from the source.
Detective Stampfl will cover the following:
- The make-up of the Seattle Police Crime Scene Investigation’s Unit
- The type of crimes that they respond to investigate
- Steps necessary to properly document a crime scene
- The differences between the real world and how investigations are portrayed on television, and a look at current forensic technology and its future
Detective Brian Stampfl has been involved in law enforcement for over eighteen years. He began his career in 1991 in Southern California with the San Bernardino Police Department, and later joined the Seattle Police Department in 1995. Since joining SPD, Detective Stampfl has worked in patrol and was a field training officer, tasked with training new officers who had just graduated from the academy. Detective Stampfl went on to become an academy instructor and, when the opportunity arose, he acquired the title of detective and worked for over three years in the Sexual Assault and Child Abuse Unit. Then in 2005, the opportunity of a lifetime presented itself as the Seattle Police Department sought to create its own Crime Scene Investigations Unit. Detective Stampfl was one of seven detectives chosen not only to staff this new unit, but to assist in building it from the ground up, starting from what was simply an idea. In addition to hours of training associated with Crime Scene Investigation, Detective Stampfl is a graduate of the National Forensic Academy in Knoxville, Tennessee and is an adjunct faculty member of Seattle University, where he teaches a course in Crime Scene Investigation.