Robust and reliable Android app automation with Espresso
Satyajit Malugu, GoDaddy
Quality in software delivery and sustainment will always face competing pressures among the capability desired, the cost incurred, and the time taken, and of course these trade-offs shape the measurable quality delivered.
Today's technology landscape adds further pressures that we need to understand in order to be effective at keeping quality high.
At the QASIG we’ll discuss these factors as well as new perspectives (and revisit some baseline tenets) on planning and readiness for software quality going forward.
About our speaker: Brian Gaudreau has successfully delivered software services and solutions for over 20 years. He is an experienced leader in software quality and in the continuous improvement of products and processes.
Specialties: Software Delivery and Quality, Process Analysis and Improvement, Program Management, PCI (Payment Card Industry) compliance, Resource Management including offshore teams, ITIL, Regulatory and Compliance testing/audit, Cloud technologies, SaaS, IaaS, PaaS, Test environments and test methodologies. Certified Scrum Master.
In January, and all year, we'll be focusing on the future of QA: where the industry is heading, how we can best add value, what skills we should be developing and refining, and what software, tools, and code we should learn.
We are excited to have a great panel of QA representatives from local companies who will help us answer some of these questions. We are happy to welcome the following colleagues:
Moderated by Andy Fox, Software Design Engineer in Test, Quardev
Service virtualization in Action: How Alaska Airlines tests for snow storms in July
Presented by Ryan Papineau, Automated Testing Engineer, Alaska Airlines
Why did Alaska Airlines receive J.D. Power's "Highest in Customer Satisfaction" recognition for 8 years straight, plus the "#1 on-time major North American carrier" award for the last 5 years? A large part of the credit belongs to their software testing team's proactive approach to disrupting the traditional software testing process. The team uses advanced test automation in concert with service virtualization to rigorously test their complex flight operations management application, which interacts with myriad dependent systems (fuel, passenger, crew, cargo, baggage, aircraft, and more). The result: operations that run smoothly, even if they encounter a snowstorm in July. Attend this session to get a first-hand account of how Alaska Airlines leverages service virtualization to address common testing challenges, and to learn Alaska Airlines' best practices for managing the complexity of multiple dependent systems for testing.
About our speaker: Ryan uses systems engineering, cross-team collaboration, and data analytics to provide complex test environments that behave like production.
Leading Change from the QA team
Most efforts to request or implement change fail. They fail often enough that the "change curve" for organizational change is derived from the grieving process when a loved one dies. Shock, denial, anger, and fear are experienced before the organization starts accepting the change and committing to it. These change efforts fail somewhere between shock and fear.
Yet, the opportunity for change is large, especially when it comes to quality. We all know that preventing bugs is better than finding them. We also know that finding bugs earlier is better than finding them late. Since testing is often done late in the development cycle, when we want to drive a change it usually involves asking other teams to change their behavior.
In this session, I will show you a four-step process for leading change and illustrate it with several examples of successful process changes that led to better quality and testing. Each step comes with a number of tools, methods, and examples that have proven useful in influencing teams outside of your direct control.
About our speaker: John Ruberto has been developing software in a variety of roles for 30 years. He has held positions ranging from development and test engineer to project, development, and quality manager. He has experience in Aerospace, Telecommunications, and Consumer software industries. Currently, he is a Director of Quality Engineering at Concur. He received a B.S. in Computer and Electrical Engineering from Purdue University, an M.S. in Computer Science from Washington University, and an MBA from San Jose State University.
Frank Charlton, Software Test Lead, Sonos
Data is Your Friend
As testers there are so many things that we can be doing with data/metrics/analytics. Are you making the most of data?
About: Frank Charlton likes to break things and think through problems in creative ways. He found his way into testing after a career in the music industry where (it turns out) he was building the same kinds of skills he utilizes to this day. He is a Software Test Lead for Sonos where he works primarily on their applications, and tries to ensure the Seattle office is blasting good music throughout the day. With a strong passion for UI, he works in a tight-knit team of developers, UX designers, product managers, and researchers to ensure that users are always given the best experience, without being weighed down by technology. Frank attempts to write at frankcharlton.me and tries to be funny sometimes on Twitter @frankcharlton. When not sitting in front of a computer he can be found singing karaoke, camping, or tasting cocktails.
Jason Thane, Software Developer and Co-Founder, General UI
Pairing means working in teams of two developers with one computer, two keyboards, and two mice. One driver, one navigator. I will talk about why it's helpful and what we get out of it at General UI. Notably, you solve problems differently when talking through a problem as a pair than when writing code solo.
About: Jason Thane is a software developer and co-founder of General UI, a Seattle-based developer community built on principles of communication and trust.
Testing vs. Acceptance Criteria Checking
We have come a long way from requirements to user stories, and user stories now have acceptance criteria. We have zero-defect policies for code merges and 100% test-case pass rates. Yet we still manage to release products with huge bugs: inconsistent user experience, inconsistent messaging, UI elements that vary wildly from one page to another, and, most of all, inconsistent logic in the code. If customers can see these issues, how did the testers miss them?
About: Srilu Pinjala has 11 years' experience in testing, test management, and automation. She has worked at Lowes, Expedia, Porch, Amazon, IBM, and other places where test cases were non-existent or abysmal. She believes in exploring the product and fixing the requirements, not the other way around, and she uses the same approach to write test cases. She enjoys testing customer-facing applications. Connect with her at https://www.linkedin.com/in/srilupinjala; more fun stuff at http://qasrilu.blogspot.com/
Satyajit Malugu, Senior SDET, GoDaddy
JS is a different kind of language from static languages such as Java and C#, and even from dynamic languages like Ruby and Python. A reliable automation suite requires test tagging, synchronized waits, parallel test runs, and other features to make it work in an organization. This talk will walk through these aspects.
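Two of the aspects mentioned above, test tagging and synchronized waits, can be sketched in a few lines of plain Node.js. This is a minimal illustration only; the names (`runTagged`, `waitFor`) are hypothetical and not from any specific framework the talk covers.

```javascript
// Tag tests so CI can select subsets, e.g. run only "smoke" tests.
const tests = [
  { name: 'login works', tags: ['smoke'], fn: () => true },
  { name: 'full checkout flow', tags: ['regression'], fn: () => true },
];

function runTagged(tag) {
  // Filter by tag, then execute each selected test.
  return tests
    .filter(t => t.tags.includes(tag))
    .map(t => ({ name: t.name, passed: t.fn() }));
}

// A synchronized wait: poll a condition instead of sleeping a fixed time,
// which makes UI automation resilient to variable load times.
async function waitFor(condition, timeoutMs = 2000, intervalMs = 50) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Condition not met within ' + timeoutMs + 'ms');
}

// Only the one smoke-tagged test is selected and run.
const smokeResults = runTagged('smoke');
console.log(smokeResults.length); // logs 1

// waitFor resolves as soon as the flag flips, not after a fixed sleep.
let ready = false;
setTimeout(() => { ready = true; }, 100);
waitFor(() => ready).then(ok => console.log('ready:', ok));
```

Real suites layer these ideas into their runner (e.g. tag filters on the CLI and explicit waits around UI assertions), but the underlying mechanics are the same: select by metadata, and poll rather than sleep.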
About: Satyajit is a Senior SDET focusing on mobile testing at GoDaddy in Kirkland, Washington, USA. His work involves automating native and hybrid apps and providing a test perspective to his team, which recently converted from waterfall to an agile process. Before GoDaddy he worked at Urbanspoon, and he gained his SDET knowledge at Microsoft. As a testing leader in a company that is deploying a suite of native apps, he is involved with strategizing, executing, and mentoring other teams on best practices for native mobile testing. He has presented on mobile topics at various conferences. He blogs at www.mobiletest.engineer, and you can find him on Twitter at @malugu.
MetaAutomation: Quality Automation for Software that Matters
Do you do automation for software quality? I have, and still do. Years ago, I realized we've all been doing it somewhat wrong, so I set out to reassess and redefine the big picture.
The result is MetaAutomation, a pattern language for vastly greater business value from quality measurements. It's more than how not to do it wrong; it's about how to create far better business value than conventional patterns and practices allow.
This lightning talk gives a launching point to quality automation for software that matters.
About: Matt Griscom has 30 years' experience creating software, including innovative test automation, harnesses, and frameworks. Two degrees in physics primed him to seek the big picture in any setting. This comprehensive vision and a love of solving difficult and important problems led him to create the MetaAutomation pattern language to enable more effective automated verifications for software quality measurement. Matt blogs on MetaAutomation and publishes books on that topic to lead the entire software quality process through a quantum leap in productivity, communication, and business value. Matt loves helping people solve problems with computers and IT, and is available by email at email@example.com.