Human-Scale Test Automation
Presented by: Michael Hunter, Senior SDET, Microsoft
Michael will give a modified version of his workshop, Human-Scale Test Automation, recently presented at CAST 2013 – with audience guidance and input.
Michael has spent the last ten years implementing automation stacks of one form or another. Most of them have been useful. Some have even continued to be useful after he left the team. In helping all these teams converge on a stack that works for them, Michael found two constants: every stack is different, and finding the right stack is hard!
All those implementation details get in the way, even when we’re confident we’ve abstracted them all away. In this workshop we’ll experience this firsthand: we’ll figure out the “right” set of customer actions, implement them in an automation stack in which we are the various components, and then execute a few test cases and see what we learn.
About our speaker: While studying architecture in Chicago, IL, I took an internship updating CAD drawings at a major Chicago bank. My desire to make the computer do most of the work turned that internship into a full-time job writing applications for the CAD system as well as for other areas of the bank. At the same time, a major CAD company was looking for people fluent in both CAD and programming – a perfect fit with my experience. The collaboration proved fruitful for both parties; I found lots of issues with the APIs, and the expertise I developed with those APIs led to my first published articles.
My work on AutoCAD brought me a job offer from a competitor and my first full-time testing job. A later acquisition of that company by Microsoft made me a Microsoftie, and I’m somewhat bemused to have now spent thirteen years helping Microsoft test better.
My “You Are Not Done Yet” checklist and other good stuff are at http://www.thebraidytester.com.
Presented by: Dave Mozealous, Samantha Kalman, and Jacob Stevens
Lightning Talk Speakers and Topics:
Dave Mozealous, Amazon – Using image comparison to improve UI testing: Dave will present on how tools like Selenium WebDriver and ImageMagick can aid and improve the testing of UI changes for the Web.
Samantha Kalman – Design-Level Testing: Prototypes can be an effective tool in evaluating the quality of a user experience in the modern, multi-device landscape. Samantha is an independent game developer, designer, and prototyper with an extensive background in testing practices, including positions at Quardev, Unity Technologies, and Amazon.
Jacob Stevens – Intro to Mobile Test Automation using Trade Federation: Jacob will give a brief overview of how Trade Federation works, show what it can do for you, and highlight its limitations.
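The image-comparison approach in Dave’s talk boils down to diffing pixels between a baseline screenshot and a fresh capture, then flagging the page when too many pixels change. As a rough illustration only – not the presenters’ actual tooling, which uses Selenium WebDriver and ImageMagick – here is a minimal pure-Python sketch of the pixel-diff idea, using a hypothetical `diff_ratio` helper:

```python
def diff_ratio(baseline, candidate, tolerance=0):
    """Return the fraction of pixels that differ between two equally
    sized screenshots, each given as a flat list of (r, g, b) tuples.
    A nonzero tolerance ignores small per-channel rendering noise
    (anti-aliasing, font hinting) that would otherwise cause flaky
    failures."""
    if len(baseline) != len(candidate):
        raise ValueError("screenshots must be the same size")
    differing = sum(
        1
        for a, b in zip(baseline, candidate)
        if max(abs(x - y) for x, y in zip(a, b)) > tolerance
    )
    return differing / len(baseline)

# A 2x2 "screenshot" pair where one pixel changed:
before = [(255, 255, 255)] * 4
after = [(255, 255, 255)] * 3 + [(200, 10, 10)]
assert diff_ratio(before, after) == 0.25
```

In a real setup the screenshots would come from WebDriver captures and the comparison would typically be delegated to ImageMagick’s `compare` command, which also produces a visual diff image; the tolerance knob plays the same role either way.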
Scripted Manual Automated Exploratory Testing
Presented by: Keith Stobie, TiVo
Manual versus automated is a well-known continuum. Less known explicitly is the scripted versus exploratory dimension and its interaction with manual versus automated.
Join us for the May QASIG to learn about the forces that influence when automation or manual testing is most appropriate and when confirmatory (scripted) or bug finding (exploratory) is most appropriate. Keith Stobie will show the role and benefit of each type (manual scripted, automated scripted, manual exploratory, automated exploratory).
About our presenter: Keith Stobie is a Senior Quality Engineering Architect at TiVo who specializes in web services, distributed systems, and general testing, especially design. Previously he has been Test Architect for Bing Infrastructure where he planned, designed, and reviewed software architecture and tests; and worked in the Protocol Engineering Team on Protocol Quality Assurance Process including model-based testing (MBT) to develop test framework, harnessing, and model patterns. With three decades of distributed systems testing experience, Keith’s interests are in testing methodology, tools technology, and quality process.
Check out his blog (http://testmuse.wordpress.com) to learn more about his work. Keith is a volunteer with SASQAG.org and PNSQC.org and a member of AST, ASQ, ACM, and IEEE. He has a BS in computer science from Cornell University, holds the ISTQB Foundation Level and ASQ CSQE certifications, and is a BBST Foundations graduate. Keith keynoted at CAST 2007 and MBT-UC 2012 and has spoken at many other international conferences.
Anyone can be a test innovator – why not you?
Presented by: Alan Page, Microsoft
Testing Innovation For Everyone – slide deck
Software testers’ aptitude for systems thinking and for identifying problems and patterns makes them well-suited for innovation, yet few testers take the time to apply their skills and experience to this end. Successful innovation is not purely a matter of skill, intelligence, or luck. Innovation begins with careful identification and analysis of a problem, obstacle, or bottleneck, followed by a solution that not only solves the problem but frequently solves it in a way that has widespread benefit – or in a way that changes the basic nature of the problem entirely.
Alan Page breaks down the cogs and wheels of innovation and shows examples of how some testers are applying game-changing creativity to discover new ways to improve tests, testers, and testing in their organizations. Problems, solutions, tips, tricks, and more are all on the radar for this whirlwind tour of pragmatic test innovation. Best of all, you’ll walk away knowing that anyone, especially you, can be a test innovator.
About our presenter: Alan Page is currently a Principal SDET (yet another fancy name for tester) on the Xbox console team at Microsoft. He has previously worked on a variety of Microsoft products including Windows, Windows CE, Internet Explorer, and Office Lync. He also spent some time as Microsoft’s Director of Test Excellence, where he developed and ran technical training programs for testers across the company.
Alan is edging up on his 20th anniversary of being a software tester. He was the lead author of How We Test Software at Microsoft, contributed a chapter on large-scale test automation to Beautiful Testing (Adam Goucher/Tim Riley), and contributed to Experiences of Test Automation: Case Studies of Software Test Automation (Dorothy Graham/Mark Fewster). You can follow him on his blog (http://angryweasel.com/blog) or on Twitter (@alanpage).
Agile and Quality – How Can They Work Together? A Panel Discussion.
Moderated by: Jacob Stevens, Quardev, Inc.
Panel Members: Joy Shafer, Quardev, Inc.; Uriah McKinney, Deloitte Digital; and Shawn Henning, Deloitte Digital
Join us for a panel discussion on Agile practices and quality – hear from industry professionals from various company sizes who work in Agile environments on how they mitigate risk and incorporate quality best practices. The panel will take questions from the moderator and audience and is sure to be a great discussion!
Jacob Stevens is a Senior Test Lead at Quardev, Inc. and a ten-year QA veteran with over 40 industry-leading clients. The scope and scale of the projects, and the types of platforms, technologies, and methodologies he’s worked with, have been widely varied. Jacob studied under Jon Bach to adopt a context-driven approach to test design. One of Jacob’s favorite topics in QA is epistemology. How do we know that our test results are accurate? How do we ensure inherent biases in our test execution methodologies do not manifest in false positives or false negatives? Jacob enjoys talking technology and many other subjects on Twitter @jacobstevens. Jacob is also a little uncomfortable writing about himself in the third person.
About our panel members:
Joy Shafer is currently a Consulting Test Lead at Quardev on assignment at Alaska Airlines. She has been a software test professional for almost twenty years and has managed testing and testers at diverse companies, including Microsoft, NetManage and STLabs. She has also consulted and provided training in the area of software testing methodology for many years. Joy is an active participant in community QA groups. She holds an MBA in International Business from Stern Graduate School of Business (NYU). For fun she participates in King County Search and Rescue efforts and writes Fantasy/Sci-fi.
Uriah McKinney has been deeply involved in mobile quality assurance since the beginning of the 3rd mobile revolution (circa 2008). Throughout his tenure with Deloitte Digital (formerly, Übermind), Uriah has balanced client engagements on iOS, Android, and mobile web projects with developing a methodological framework for quality assurance specifically tailored to the intersection of mobile and agile development. Uriah is one of the founding members of the Center of the Agile Universe meetup (http://centeroftheagileuniverse.com/); the Product Owner of the upcoming Mobile Agile Quality Conference (http://maqconference.com/); and apparently not above shameless cross-promotion.
Shawn Henning is part of the Agile transformation at Deloitte Digital (formerly Übermind). As both a Senior QA Engineer and Scrum Master, he helps teams iteratively deliver world-class mobile software. He is passionate about working closely with clients to regularly deliver working code, organically growing a completed product through constant feedback and iteration. Shawn has over fifteen years of experience in Quality Assurance in both desktop and mobile software. He attended his first Open Space Technology conference a year ago and was struck by the power of the format to foster conversations that resulted in real and practical answers to participants’ problems. He has since attended many OST and Lean Coffee events and helped to organize last year’s highly successful Mobile, Agile, Quality conference: MAQCon.
Leaping into “The Cloud”: Rewards, Risks, and Mitigations
Presented by: Ken Johnston and Seth Eliot, Microsoft
The cloud has rapidly gone from “that thing I should know something about” to the “centerpiece of our corporate IT five-year strategy.” However, cloud computing is still in its infancy. Sure, the marketing materials presented by cloud providers tout huge cost savings and service level improvements—but they gloss over the many risks such as data loss, security leaks, gaps in availability, and application migration costs. Ken Johnston and Seth Eliot share new research on the successful migrations of corporate IT and web-based companies to the cloud. Ken and Seth lay out the risks to consider and explore the rewards the cloud has to offer when companies employ sound architecture and design approaches. Discover the foibles of poor architecture and design, and how to mitigate these challenges through a novel Test Oriented Architecture (TOA) approach. Take back insights from industry leaders—Microsoft, Amazon, Facebook, and Netflix—that have jumped into the cloud so that your organization does not slam to the ground when it takes the leap.
About our speakers:
Seth Eliot is Senior Knowledge Engineer for Microsoft Test Excellence focusing on driving best practices for services and cloud development/testing across the company. He previously was Senior Test Manager, most recently for the team solving exabyte storage and data processing challenges for Bing, and before that enabling developers to innovate by testing new ideas quickly with users “in production” with the Microsoft Experimentation Platform (http://exp-platform.com). Testing in Production (TiP), software processes, cloud computing, and other topics are ruminated upon at Seth’s blog at http://bit.ly/seth_qa and on Twitter (@setheliot). Prior to Microsoft, Seth applied his experience at delivering high quality software services at Amazon.com where he led the Digital QA team to release Amazon MP3 download, Amazon Instant Video Streaming, and Kindle Services.
Ken Johnston is a frequent presenter, blogger, and author on software testing and services. Currently he is the Principal Group Program Manager for the Bing Big Data Quality and Measurements team. Since joining Microsoft in 1998, Johnston has filled many other roles, including test lead on Site Server and MCIS and test manager on Hosted Exchange, Knowledge Worker Services, Net Docs, MSN, the Microsoft Billing and Subscription Platform service, and Bing Infrastructure and Domains. Johnston has also been the Group Manager of the Office Internet Platforms and Operations (IPO) team, and for two and a half years (2004–2006) he served as the Microsoft Director of Test Excellence. He earned his MBA from the University of Washington in 2003. He is a co-author of “How We Test Software at Microsoft” and a contributing author to “Experiences of Test Automation: Case Studies of Software Test Automation.” To reach Ken, contact him on Twitter: @rkjohnston.