
My career in quality assurance started suddenly. I was a stay-at-home mom with a friend who had an opportunity at a large computer company as an SAP tester. I walked in the door my first day, not knowing a thing about test cases, Agile methodology, defects, or anything to do with testing for that matter. I put my nose to the grindstone and learned as I went, traveling the country doing QA contracts as an independent contractor. I fell in love with finding defects, and I still love it just as much.

I was nervous to leave the contracting world because I liked the freshness of a new contract, but becoming a consultant with Omni has given me the ability to be placed on different projects. When one is finished, they have another one lined up for me, so I still get the variety.

My day starts with coffee. Always coffee. I don’t have a set time that I need to be at the office; my job is pretty flexible as long as I get my hours in, finish my work, and make it to my meetings on time. I’m an early riser, so I like to start pretty early. I typically know what I’m going to be doing each day, thanks to the sprint planning sessions, yesterday’s accomplishments, and the release schedule.

Then I check my email. What did I miss since I stopped working yesterday, or what came in overnight? Same with Slack. We use Slack for a lot of our projects and I find that checking the messages there is just as important as checking my email.

Then I attend my daily stand-up (DSU) for each project that I’m on. This is where each team member shares what they worked on yesterday and what they plan to work on today. It’s important to hear the developers’ plans for the day so I know if any releases are coming out or if there are any fixes I need to check.

After the DSU, I check my tickets in TFS or Jira to see if any updates have been made, and then I dive into work. Diving into work can mean one of many things for me…

Analyzing for testing:

Do we need a test plan created? If so, I draft a test plan that gives direction on what will be tested and how. Then I break it down into smaller tasks and test cases.

I’ll take a look at what features are being worked on this sprint and analyze where my testing might come into play. Does the ticket have enough detail to write test cases, or do I need to meet with the developer to get more information? If possible, I write the test cases based on what I think will need to be tested, fully knowing that they are ever-changing as the ticket goes through the development phase.
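At its core, a test case drafted from a ticket is just a set of preconditions, steps, and an expected result. A minimal sketch in Python, with the identifier, feature, and every detail made up purely for illustration:

```python
# Hypothetical test case drafted from a ticket's details.
# Everything here (ID, feature, steps) is invented for the example;
# real projects would track this in TFS, Jira, or a test management tool.
test_case = {
    "id": "TC-101",                      # made-up identifier
    "feature": "Password reset",
    "preconditions": ["A user account exists"],
    "steps": [
        "Click 'Forgot password' on the login page",
        "Enter the account email and submit",
    ],
    "expected_result": "A reset email is sent to the account address",
    "status": "Draft",                   # updated as the ticket moves through development
}
```

Keeping the expected result explicit next to the steps makes it easy to revise the case as the ticket changes during development.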

Deployment is done. Testing:

After a deployment, I test the main functionality of the application to make sure nothing broke when the new code was deployed. This is called smoke testing, and it occurs after every deployment. Then I launch into testing the highest-priority ticket in the sprint. If I wasn’t able to develop all of the test cases from the ticket details, I resume writing the test cases and test as I go along.
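A smoke test like the one above can be sketched as a simple checklist runner over the application’s critical pages. This is a hypothetical sketch, not any project’s actual suite; `check_page()` is a placeholder for whatever really loads each page (a browser driver, an HTTP request, etc.):

```python
# A minimal post-deployment smoke test sketch.
# CRITICAL_PAGES and check_page() are assumptions for illustration.

CRITICAL_PAGES = ["login", "dashboard", "search", "checkout"]

def check_page(page: str) -> bool:
    # Placeholder: pretend every critical page loads successfully.
    # Swap in a real browser or HTTP check in practice.
    return True

def smoke_test(pages):
    """Return the list of critical pages that failed to load."""
    return [page for page in pages if not check_page(page)]

failures = smoke_test(CRITICAL_PAGES)
assert not failures, f"Smoke test failed for: {failures}"
```

The point is breadth over depth: one quick pass over the features users touch most, run after every deployment, before any deep testing begins.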

Defect logging:

While testing, if I happen to run into a bug/defect, it’s my job to document what I was doing when I found it and put that in the ticket. I usually do this through screenshots or, in some instances, videos. I lay out the steps to re-create the bug, log the defect in TFS or Jira (or whatever test management software the project is using), and send it over to the developer. The developer then fixes the code and re-releases it to the QA environment for me to retest.
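The ingredients of a defect report described above — steps to re-create, expected versus actual behavior, attachments — can be modeled as structured data. This is an illustrative sketch only; the fields mirror what most trackers ask for, and the example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Sketch of the fields a tracker like TFS or Jira typically captures."""
    title: str
    steps_to_reproduce: list
    expected: str
    actual: str
    attachments: list = field(default_factory=list)  # screenshot/video paths

# Hypothetical example bug, values made up for illustration.
bug = DefectReport(
    title="Save button unresponsive after editing profile",
    steps_to_reproduce=[
        "Log in as a standard user",
        "Open the profile page and edit the display name",
        "Click Save",
    ],
    expected="Profile is saved and a confirmation message appears",
    actual="Nothing happens; no error is shown",
    attachments=["save_button_bug.png"],
)
```

Spelling out expected versus actual behavior is what lets the developer reproduce the bug quickly and lets the tester verify the fix on retest.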

Regression testing:

Every so often there is a release to production, and before that release I’m required to test all the areas of the application that may have been affected by the code deployed to the QA environment. This is called regression testing. It occurs at different times on different projects; on one of my current projects, it happens every two sprints. This is a deep dive into the application, as opposed to the surface-level checks done in smoke testing.

Once I’m done with the regression testing, code is deployed to production, and I cross my fingers that I didn’t miss anything!

Being a tester means looking at things and testing in ways that may not have been thought about when the code was developed. You ask “what if” a lot. What if I push this button instead of that one, will it break? The team also looks to you for your take on the user experience. Does it make sense to push this button after that button? Does the application flow nicely?

You’re the last line of defense before user acceptance testing, and you want to catch as much as you can! I had a developer tell me the other day, “Julie, you find the oddest bugs.” It was a proud moment.


ABOUT THE AUTHOR

Julie Helzer is a Solutions Consultant at Omni. She contributes to the Quality Assurance/Business Analyst team. She has over 6 years of experience in the Information Technology field. Julie spent 5 years traveling the country for various SAP Quality Assurance contracts before settling into Omni as a Solutions Consultant. Her experience includes Agile methodology, web-based application testing, mobile testing, SQL testing, QA team leadership, and QA mentoring. Her passion lies in QA of mobile technology, and she spends her free time exploring the world of mobile technology.

Omni Resources is a premier custom software development firm focused on building web-based & mobile applications, business process automation and data management solutions for manufacturing, healthcare, insurance, retail and SaaS companies.
