All posts by Shelly

March 2014 QASIG – Testing Like Batman

Testing Like Batman

Presented by: Zephan Schroeder, Software Development Engineer in Test at Philips Healthcare

Testing Like Batman slides

http://www.xmind.net/embed/9ZMY/

What’s in your utility belt? What is in your colleague’s utility belt? Where do you get your tools and information? Which hero do you test like?

We will touch on many test tools currently in use and outline categories of tools. We will also explore which tools are effective and when they are appropriate. Along the way we will discuss a few heroes and how your mindset is a critical factor for both success and personal happiness.

About our speaker:
Zephan Schroeder has worked for Microsoft for over 15 years doing technical support, technical editing, program management, and software testing. He currently works at Philips Healthcare as a Senior Software Engineer testing remote service solutions for Philips medical imaging devices located around the world. Zephan also manages the TFS (Microsoft Team Foundation Server) instance providing version control, work item tracking, defect tracking, and a release repository for over thirty users across 4+ product teams. Additionally, Zephan is responsible for ISO 27001 audit compliance for the Remote Service Solutions development team.

When not chasing bugs, Zephan enjoys raising a family of two cats, two dogs, one teenage boy, one teenage girl, and an amazing wife. When time permits, he does mentoring, tech coaching, casual volleyball, and online chess (zephans on chess.com).

January 2014 QASIG – Testing Science: Breaking the Fourth Wall of Engineering

Presented by: Curtis Stuehrenberg, Software Quality Assurance Manager, Climate Corporation

The modern test engineer has a wide variety of tools aiding them in their quest not just to verify computer software but to help make sure it’s “providing perceived value to someone at some time.” However, what do you do when your personas, your user stories, and your field trips are simply not enough?

This is a question I’ve found myself facing again and again in my career. I first had to face it when helping to build software designed to aid bond traders at the Federal Home Loan Bank of Seattle. I’m currently facing it as I work with agronomists, statisticians, meteorologists, climatologists, and actuarial risk analysts building products to turn the industry of crop insurance on its ear.

Please join me for an evening of discussion about the problems we face when asked to test a value proposition for which we have no context or experience, and about how I have addressed them in the past and am addressing them today.

About our speaker:

Curtis is currently helping the world’s people and businesses adapt to a changing climate as the Software Test and Quality Assurance Manager for the Climate Corporation, which has engineering offices in San Francisco and Seattle. Before joining Climate this past October, Curtis experimented with big data collection and machine learning algorithms at Electronic Arts, helped build Accelrys’s industry-leading small-molecule chemical lab management software, and tried disrupting how phase two and phase three clinical pharma trials are designed and executed with the SF startup Medrio.

November 2013 QASIG Meeting

High Volume Automated Testing for Software Components

Presented by: Harry Robinson and Doug Szabo, Microsoft

View the slide deck: HVTA 2013-11-13

Note from Harry: During the presentation, we showed sequences that expose bugs in sort routines. For those who would like to try their luck, here is the URL that hosts the Sorting Demo: http://www.brian-borowski.com/Software/Sorting. The algorithm we showed is called Shearsort. To get people started, the sequence “8 7 6 5 4 3 2 1 0” succeeds; the sequence “8 1 6 3 4 5 2 7 0” fails.

“Bugs are more insidious than ever we expect them to be.” – Boris Beizer

Would you expect to find bugs in an award-winning library of sorting routines written by professional coders and featured in the 2006 O’Reilly book, Windows Developer Power Tools?

Or, to phrase it differently, which of the following inputs will expose a bug in this well-regarded sorting library?

A. 0 1 2 3 4 5 6 7 8

B. 1 0 3 2 5 6 4 7 8

C. 8 1 6 3 4 5 2 7 0

D. 1 0 3 2 5 4 7 6 8

E. 8 7 6 5 4 3 2 1 0

The bug-exposing input turns out to be input C.

Would you have chosen that sequence for your unit testing? Probably not.

Let a relentless tester and an enlightened developer show you how simple, high-volume automation found insidious bugs that eluded a bevy of well-crafted unit tests. See the results, ask questions, get answers, and find out whether this technique should be part of your toolkit.
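To make the idea concrete, here is a minimal Python sketch of the high-volume approach: generate many random sequences, run the sort under test, and check each result against a trusted oracle. This is my own illustration, not code from the talk; `sort_under_test` is a hypothetical stand-in for the routine being exercised, and the run count is arbitrary.

```python
import random

def sort_under_test(values):
    """Hypothetical stand-in for the routine being tested (e.g. a
    Shearsort implementation). Substitute your own."""
    return sorted(values)

def high_volume_sort_test(sort_fn, runs=100_000, max_len=9, max_val=9):
    """Feed many random sequences to sort_fn and verify each result
    against a trusted oracle (Python's built-in sorted)."""
    for run in range(runs):
        data = [random.randint(0, max_val)
                for _ in range(random.randint(0, max_len))]
        expected = sorted(data)        # oracle: a trusted reference sort
        actual = sort_fn(list(data))   # pass a copy; keep the input for reporting
        if actual != expected:
            print(f"run {run}: {data} -> {actual}, expected {expected}")
            return data                # a bug-exposing input, like input C above
    print(f"no failures in {runs} runs")
    return None

if __name__ == "__main__":
    high_volume_sort_test(sort_under_test)
```

Even a crude harness like this can try millions of inputs overnight, which is how sequences like input C tend to surface: not from insight, but from sheer volume.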

About our speakers:

Harry Robinson has been working on and thinking about software testing for a long time, pioneering advanced test generation approaches at Bell Labs, HP, Google and Microsoft over the past 20 years. He currently focuses on test techniques that combine human and machine intelligence. He is Principal SDET for Microsoft’s Windows Embedded team.

Doug Szabo has been developing and breaking software for 20 years across a range of applications from geodesy to 3-D hyperbolic graphs to automated mapping and facilities management systems. His 3-D work provided the visualization interface for Test Model Toolkit, Microsoft’s first model-based testing tool. Doug is a big fan of using programmatic test generation to get machines to do the heavy lifting in test.

September 2013 QASIG Meeting

Human-Scale Test Automation

Presented by: Michael Hunter, Senior SDET, Microsoft

Michael will give a modified version of Human-Scale Test Automation, the workshop he recently presented at CAST 2013, with audience guidance and input.

Michael has spent the last ten years implementing automation stacks of one form or another. Most of them have been useful. Some have even continued to be useful after he left the team. In helping all these teams converge on a stack that works for them, Michael found two constants: every stack is different, and finding the right stack is hard!

All those implementation details get in the way, even when we’re confident we’ve abstracted them all away. In this workshop we’ll experience this firsthand: we’ll figure out the “right” set of customer actions, implement them in an automation stack where we are the various components, and then execute a few test cases and see what we learn.

About our speaker: While studying architecture in Chicago, IL, I took an internship updating CAD drawings at a major Chicago bank. My desire to make the computer do most of the work turned that internship into a full-time job writing applications for the CAD system as well as for other areas of the bank. At the same time, a major CAD company was looking for people fluent in both CAD and programming – a perfect fit with my experience. The collaboration proved fruitful for both parties; I found lots of issues with the APIs, and the expertise I developed with those APIs led to my first published articles.

My work on AutoCAD brought me a job offer from a competitor and my first full-time testing job. A later acquisition of that company by Microsoft made me a Microsoftie, and I’m somewhat bemused to have now spent thirteen years helping Microsoft test better.

My “You Are Not Done Yet” checklist and other good stuff are at http://www.thebraidytester.com.

July 2013 QASIG Meeting

Lightning Talks

Presented by: Dave Mozealous, Samantha Kalman, and Jacob Stevens

Lightning Talk Speakers and Topics:

Dave Mozealous, Amazon – Using image comparison to improve UI testing: Dave will present on how tools like Selenium WebDriver and ImageMagick can aid and improve the testing of UI changes for the web (see the sketch after this list).

Samantha Kalman – Design-Level Testing: Prototypes can be an effective tool in evaluating the quality of a user experience in the modern, multi-device landscape. Samantha is an independent game developer, designer, and prototyper with an extensive background in testing practices, including positions at Quardev, Unity Technologies, and Amazon.

Jacob Stevens – Intro to Mobile Test Automation using Trade Federation: Jacob will give a brief overview of how Trade Federation works, then show what it can do for you and highlight its limitations.
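For a flavor of the image-comparison technique in Dave’s talk, here is a rough Python sketch of one common workflow. It is my own illustration rather than material from the presentation, and it assumes Selenium’s Python bindings, a local Firefox/geckodriver setup, and ImageMagick’s compare command on the PATH; the URL and file names are placeholders.

```python
import subprocess
from selenium import webdriver

BASELINE = "baseline.png"  # known-good screenshot captured on an earlier run
CURRENT = "current.png"
DIFF = "diff.png"          # compare writes a visual diff image here

# Capture a fresh screenshot of the page under test.
driver = webdriver.Firefox()
try:
    driver.get("https://example.com/")  # placeholder page under test
    driver.save_screenshot(CURRENT)
finally:
    driver.quit()

# ImageMagick's `compare -metric AE` reports the number of differing
# pixels on stderr (on ImageMagick 7 the command is `magick compare`).
result = subprocess.run(
    ["compare", "-metric", "AE", BASELINE, CURRENT, DIFF],
    capture_output=True, text=True,
)
print(f"pixels differing from baseline: {result.stderr.strip()}")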

May 2013 QASIG Meeting

Scripted Manual Automated Exploratory Testing

Presented by: Keith Stobie, TiVo


Manual versus automated testing is a well-known continuum. Less widely recognized is the scripted-versus-exploratory dimension and how it interacts with manual versus automated.

Join us for the May QASIG to learn about the forces that influence when automation or manual testing is most appropriate and when confirmatory (scripted) or bug finding (exploratory) is most appropriate. Keith Stobie will show the role and benefit of each type (manual scripted, automated scripted, manual exploratory, automated exploratory).

About our presenter: Keith Stobie is a Senior Quality Engineering Architect at TiVo who specializes in web services, distributed systems, and general testing, especially test design. Previously he was Test Architect for Bing Infrastructure, where he planned, designed, and reviewed software architecture and tests, and he worked on the Protocol Engineering Team’s protocol quality assurance process, using model-based testing (MBT) to develop test frameworks, harnesses, and model patterns. With three decades of distributed systems testing experience, Keith’s interests are in testing methodology, tools technology, and quality process.

Check out his blog (http://testmuse.wordpress.com) to learn more about his work. Keith is a volunteer with SASQAG.org and PNSQC.org and a member of AST, ASQ, ACM, and IEEE. He has a BS in computer science from Cornell University, holds ISTQB Foundation Level and ASQ CSQE certifications, and is a BBST Foundations graduate. Keith keynoted at CAST 2007 and MBT-UC 2012 and has spoken at many other international conferences.

March 2013 QASIG Meeting

Anyone can be a test innovator – why not you?

Presented by: Alan Page, Microsoft

Testing Innovation For Everyone – slide deck

The software tester’s knack for systems thinking, and for identifying problems and patterns, makes testers well suited for innovation, yet few take the time to apply their skills and experience to this end. Successful innovation is not purely a matter of skill, intelligence, or luck. Innovation begins with careful identification and analysis of a problem, obstacle, or bottleneck, followed by a solution that not only solves the problem but frequently solves it in a way that has widespread benefit – or in a way that changes the basic nature of the problem entirely.

Alan Page breaks down the cogs and wheels of innovation and shows examples of how some testers are applying game-changing creativity to discover new ways to improve tests, testers, and testing in their organizations. Problems, solutions, tips, tricks, and more are all on the radar for this whirlwind tour of pragmatic test innovation. Best of all, you’ll walk away knowing that anyone, especially you, can be a test innovator.

About our presenter: Alan Page is currently a Principal SDET (yet another fancy name for tester) on the Xbox console team at Microsoft. He has previously worked on a variety of Microsoft products, including Windows, Windows CE, Internet Explorer, and Lync. He also spent some time as Microsoft’s Director of Test Excellence, where he developed and ran technical training programs for testers across the company.

Alan is edging up on his 20th anniversary as a software tester. He was the lead author of How We Test Software at Microsoft, contributed a chapter on large-scale test automation to Beautiful Testing (Adam Goucher/Tim Riley), and contributed to Experiences of Test Automation: Case Studies of Software Test Automation (Dorothy Graham/Mark Fewster). You can follow him on his blog (http://angryweasel.com/blog) or on Twitter (@alanpage).