Using IaC to reduce costs and build a HIPAA (Health Insurance Portability and Accountability Act) compliant infrastructure

2 05 2025

Level Up Your AWS Deployments with Terraform and IaC: A Deep Dive into a Health Startup’s Solution

In today’s fast-paced development landscape, managing infrastructure manually is not only time-consuming but also prone to errors and inconsistencies. This is where Infrastructure as Code (IaC) comes to the rescue, allowing you to define and manage your infrastructure using code. It also helps address various aspects of HIPAA compliance and supports practices such as blue/green deployment and disaster recovery. Terraform, a popular open-source IaC tool, empowers developers and operations teams to provision and manage infrastructure across various cloud providers efficiently and reliably.

A forward-thinking health startup I have been a part of has embraced the power of Terraform to build a robust, automated deployment pipeline for their application on cloud-based infrastructure. Their open-source repository highlights a comprehensive solution that tackles key aspects of modern application deployment. Let’s delve into the highlights of their approach.

Building Blocks of Automation:

This health startup’s Terraform solution isn’t just about spinning up virtual machines; it’s a holistic approach that encompasses several critical components:

  • Version Control Integration: Recognizing the importance of code management, their configuration includes steps to clone their application’s backend and frontend repositories directly onto the provisioned infrastructure. This tight integration ensures that the latest application code is readily available for deployment.
  • Docker-Powered Deployments: Embracing containerization, the deployment scripts are designed to build Docker images using provided Dockerfiles. A clever addition is the inclusion of the commit hash in the image tags, providing valuable traceability and versioning for deployments.
  • Secure Access with SSH Key Management: Security is paramount, especially in the healthcare domain. This startup’s solution incorporates a dedicated script (ssh_key_setup.sh) to securely deploy GitHub SSH keys to the provisioned servers. This enables secure cloning of private repositories without the need for manual key management on each instance.
  • End-to-End Automated Deployment: The deployment scripts orchestrate the entire process, from cloning/updating the application code and building Docker images to starting the containers with the necessary environment variables. This automation significantly reduces manual intervention and the potential for human error.
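The commit-hash image tagging described above can be sketched in a few lines of shell. The image name here is an illustrative assumption, not taken from the repository:

```shell
#!/usr/bin/env sh
# Sketch: derive a Docker image tag from the current git commit for traceability.
# Assumptions: run inside a git checkout; the image name is hypothetical.
set -eu

TAG=$(git rev-parse --short HEAD 2>/dev/null || echo "dev")  # fall back outside a repo
IMAGE="healthapp-backend:${TAG}"
echo "$IMAGE"

# The real deployment script would then build and run the image, e.g.:
# docker build -t "$IMAGE" .
# docker run -d --env-file /app/.env "$IMAGE"
```

Because the tag is the commit hash, any running container can be traced back to the exact source revision it was built from.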

Architecting for Scalability and Security on Cloud Infrastructure:

This health startup’s Terraform configuration demonstrates a well-thought-out cloud architecture. The example below uses Azure and its specific services:

  • Network Segmentation: The infrastructure is designed with separate subnets for the frontend, backend, and databases. This network segmentation enhances security by isolating different tiers of the application, crucial for protecting sensitive health data.
  • Robust Security Posture: Network Security Groups (NSGs) are properly configured to control network traffic in and out of the subnets, ensuring that only necessary ports are open and communication is restricted to authorized sources, a vital aspect for HIPAA compliance and data privacy.
  • Managed Database Services: Leveraging Azure’s managed database services provides scalability, reliability, and reduced operational overhead, allowing the startup to focus on their core healthcare mission:
    • Azure MySQL: For relational data storage.
    • Azure Cosmos DB (MongoDB API): Catering to NoSQL needs with flexibility and global distribution capabilities for potentially large patient datasets.
    • Azure Redis Cache: Implementing an in-memory data store for improved application performance through caching, ensuring a responsive user experience for healthcare professionals and patients.
  • Scalable Compute: The solution provisions two virtual machines, one for the frontend and one for the backend applications. These VMs are based on Ubuntu with automatic updates and Docker pre-installed, ensuring a secure and container-ready environment for handling health-related workloads.
  • Reliable Networking: Public IP addresses with DNS names provide easily accessible endpoints for the applications. Furthermore, the inclusion of Azure DNS for domain configuration simplifies domain management.
  • Secure HTTPS with Let’s Encrypt: Implementing HTTPS is crucial for modern web applications, especially those dealing with personal health information. This startup’s solution automates the setup of HTTPS using Let’s Encrypt certificates, ensuring secure communication with end-users and maintaining data confidentiality.
  • Automated Initialization: Bash scripts are used to configure the VMs upon startup. This includes setting up Nginx as a reverse proxy and automating the process of obtaining and renewing SSL certificates, minimizing manual configuration and maintenance in a highly regulated environment.
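The Nginx reverse-proxy setup performed by the initialization scripts might look roughly like the following server block. The domain, upstream port, and certificate paths are illustrative assumptions (the cert paths follow Let’s Encrypt’s default layout):

```nginx
# Illustrative reverse-proxy sketch; domain and upstream port are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    # Let's Encrypt default certificate locations
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;   # backend application container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Terminating TLS at Nginx keeps the application containers simple: they speak plain HTTP on a private port while all external traffic is encrypted.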

Deploying the Health Startup’s Infrastructure and application via the IaC Script:

Deploying this infrastructure, together with the scripts that pull the code from GitHub, build the Docker images, and run the application, involves the following steps:

  1. Prerequisites: Ensure you have the Azure CLI and Terraform installed on your local machine.
  2. Terraform Setup:
    • Copy terraform.tfvars.example to terraform.tfvars and populate it with your specific Azure credentials and configuration values.
    • Run terraform init to initialize the Terraform working directory and download necessary provider plugins.
    • Execute terraform plan to preview the infrastructure changes that Terraform will apply.
    • Apply the changes by running terraform apply.
  3. Deploy SSH Keys: After the infrastructure is created, run the ssh_key_setup.sh script to securely deploy your GitHub SSH key to the VMs.
  4. Trigger Deployment: The automated deployment script on the VMs will then clone the repositories, build the Docker images, and start the application services with the correct configurations.
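Step 2 above references terraform.tfvars; a hedged sketch of what such a file might contain follows. All variable names and values here are illustrative assumptions, and the real ones come from the repository’s terraform.tfvars.example:

```hcl
# Illustrative terraform.tfvars sketch; variable names are assumptions.
subscription_id = "00000000-0000-0000-0000-000000000000"
location        = "australiaeast"
resource_prefix = "healthapp"
admin_username  = "deploy"
ssh_public_key  = "~/.ssh/id_rsa.pub"
```

Keeping these values out of version control (terraform.tfvars is typically gitignored) is what lets the same configuration be reused safely across environments and teams.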

Continuous Updates:

Maintaining a live application, especially in the healthcare sector where timely updates and security patches are critical, requires ongoing attention. This startup’s solution simplifies this process:

  • In-Place Updates: To deploy new versions of the application, you can simply SSH into the server and run sudo /app/deploy.sh. This script will pull the latest code, rebuild the Docker image, and restart the service.
  • Continuous Deployment via Cron Jobs: The initialization scripts also set up continuous deployment through cron jobs, which can automatically pull updates and handle SSL certificate renewals, further reducing manual intervention and ensuring the platform remains secure and up-to-date.
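The cron-driven side of this might look like the entries below. The paths and schedules are illustrative assumptions; the real entries are written by the initialization scripts:

```shell
#!/usr/bin/env sh
# Sketch: the kind of crontab entries the init scripts might install.
# Paths and schedules are illustrative assumptions.
set -eu

cat <<'EOF'
# Pull updates and redeploy every night at 02:00
0 2 * * * /app/deploy.sh >> /var/log/deploy.log 2>&1
# Attempt certificate renewal twice a day (certbot only renews when due)
0 0,12 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
EOF
```

Note that `certbot renew` is safe to run frequently, since it is a no-op unless a certificate is close to expiry.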

Docker Image Management:

Managing Docker images follows a simple CI/CD pipeline:

  1. Build and Push: Build Docker images for both the frontend and backend applications.
  2. Container Registry: Push these images to a container registry, such as Docker Hub or Azure Container Registry, ensuring secure storage of application artifacts.
  3. Update Configuration: Update the Docker Compose files on the VMs to pull the newly built images.
  4. Deploy: Run the deployment scripts to start the new containers.
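Step 3 above updates the Compose files on the VMs to pull the newly built images; a minimal sketch of such a file follows. The registry name, tags, and ports are placeholders, not taken from the repository:

```yaml
# Illustrative docker-compose.yml fragment; registry, tags and ports are assumptions.
services:
  frontend:
    image: myregistry.azurecr.io/healthapp-frontend:abc1234
    ports:
      - "443:443"
    restart: unless-stopped
  backend:
    image: myregistry.azurecr.io/healthapp-backend:abc1234
    env_file: /app/.env
    restart: unless-stopped
```

Because the tag is the commit hash, rolling back is as simple as editing the tag to a previous hash and re-running the deployment script.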

Conclusion:

This health startup’s open-source Terraform repository provides a valuable blueprint for deploying modern applications on Azure using Infrastructure as Code principles, particularly relevant for organizations handling sensitive data. Their solution effectively leverages Terraform to automate infrastructure provisioning, integrates seamlessly with Docker and version control, and prioritizes security and maintainability. By adopting such an approach, development teams in the healthcare industry can significantly streamline their deployment processes, reduce errors, and focus on delivering innovative solutions while adhering to stringent regulatory requirements. This repository serves as an excellent learning resource for anyone looking to level up their Azure deployments with Terraform and embrace the power of IaC in a security-conscious environment.





Transforming the Digital landscape: The DevOps & SecOps Advantage

19 05 2024

In today’s rapidly evolving digital landscape, a country’s government departments face unique challenges that demand innovative solutions. As we lead a mission-critical digital transformation, building secure infrastructure that delivers exceptional user experiences, here are some insights from my journey as a Software Engineering professional.

The journey begins with identifying which technologies will gel together to deliver a performance-driven outcome for the application. With the majority of the applications being browser based, we use JavaScript/TypeScript with a Node.js backend, all tied together in a package.json that pins the versions, streamlining and synchronising the language with its ecosystem. Then comes the question of which cloud (and there is no going back on this for distributed applications): we went with Azure (serverless or not is another aspect, and a hot topic within the team). These are woven together with a GraphQL/REST API design. And the top king-maker is the deployment pipeline, managed by the wazir, the infrastructure-as-code coordinator: Terraform.

Now having established what and where we want to start, let’s work our way back to why we started in the first place.

The Cost of Traditional Development

Let’s start with something engineers from an earlier software generation will be familiar with: traditional development and deployment models. These come with hidden costs that often go unrecognized:

  • Time delays: Manual testing and deployment processes can add weeks to release cycles
  • Security vulnerabilities: Late-stage security testing often results in costly rework
  • Resource inefficiency: Siloed teams create redundant work and communication overhead
  • Technical debt: Quick fixes without proper automation create long-term maintenance burdens

With these aspects in mind, and the technology stack named above, let’s work out the details via the DevOps + Security stream that has become a standard way of working in software, with security being a major concern when deploying to the cloud.

The Development with Security Revolution, combined with Operational Resources

The journey towards a seamless approach to building, unit testing and deploying an application on the cloud, so that it can then be UAT’ed by the business, begins with the exciting task of actually coding it up and ensuring it runs in the local ‘Development’ environment. Sometimes this involves a local Hyper-V VM or a container on the developer’s sandbox laptop, or a development box the developer can remote into.

Looking back at my experience implementing development and product management practices across organizations from Microsoft and Yahoo! to Federal Government projects, I have witnessed firsthand how integrating security, development, and operations dramatically transforms delivery capabilities.

Implementing continuous integration and deployment practices has consistently delivered measurable benefits:

  • 30-40% reduction in deployment time through YAML pipeline automation
  • Security vulnerabilities detected 80% earlier in the development lifecycle
  • Infrastructure costs reduced by 25% through proper cloud resource management
  • Developer productivity increased by 35% with automated testing frameworks
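As an example of the YAML pipeline automation mentioned above, a minimal Azure DevOps pipeline with an early security-scan step might look like this. The trigger, agent pool, and script contents are assumptions for a Node.js/TypeScript stack like the one described earlier:

```yaml
# Illustrative azure-pipelines.yml sketch for a Node.js/TypeScript project.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: npm ci && npm test
    displayName: Install dependencies and run unit tests

  - script: npm audit --audit-level=high
    displayName: Dependency security scan (shift-left)

  - script: npx tsc --noEmit
    displayName: Type check
```

Placing the security scan alongside the unit tests, rather than as a late pre-release gate, is what makes the earlier vulnerability detection possible.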

Real-World Impact

During my work on ages-old projects, we went through plenty of transformation aspects of service delivery by implementing Azure DevOps pipelines and infrastructure as code, which eliminated manual configuration errors and reduced deployment windows from days to hours.

Before:

  • 3-5 days for environment provisioning
  • 48+ hours for security validation
  • 70% of team time spent on maintenance

After:

  • 45 minutes for environment provisioning
  • Continuous security validation
  • 70% of team time spent on innovation

Crucially, at the heart of our development philosophy lies DevSecOps. Integrating security practices early and throughout the development lifecycle isn’t just about building a secure platform – it’s about significant savings in time, effort, and ultimately, cost.

  • Early Threat Detection: By embedding security checks within our CI/CD pipelines, we can identify and address vulnerabilities early on, preventing costly rework and delays that often arise from discovering security issues late in the development process.
  • Automation for Efficiency: Automating security testing and infrastructure provisioning with IaC and YAML pipelines reduces manual effort, freeing up our talented team to focus on building innovative features rather than repetitive tasks. This accelerates delivery and minimizes the risk of human error.
  • Reduced Remediation Costs: Addressing security vulnerabilities in the early stages is significantly cheaper than fixing them after deployment. DevSecOps helps us shift left, catching issues before they become expensive problems.

The Path Forward

Whether you’re a technical leader exploring new opportunities or an organization seeking to transform your delivery model, the message is clear: DevSecOps (DevOps with security aspects) isn’t just about tools and processes. It’s about creating a culture where security, quality, and agility coexist, proving that even highly regulated environments can move quickly without compromising security.

The question isn’t whether you can afford to adopt DevSecOps practices—it’s whether you can afford not to.






DevOps: Skillset, but with a new Mindset

1 01 2020

https://itnext.io/do-not-put-devops-in-a-cage-3604a83821e1

Joe McKendrick wrote in an article at ZDNet that DevOps ‘requires multiple teams to work closely with each other, side by side, on a day-to-day basis, to meet the significantly shrunken delivery timelines.’

At my current organization, we have been working towards achieving the goal of Agile sprints and a DevOps/CloudOps culture. The attempts have been sincere, but the mindset change requires a lot of effort from both the management and the people working across the projects.

The management needs to understand that the workflow of a Dev+Ops cycle needs a lot of hand-holding and a certain degree of automation across the development, build, deploy and test phases of the application. On the other hand, the people who work on the projects need to work towards the goal of making tasks automated and easy to build and deploy through the use of scripts. All this entails that the test/QA team is involved with the design/development process from the requirement analysis phase. This is currently missing, and it works against the concept of an Agile DevOps view.

DevOps can be a powerful antidote to the issues of Agile not working, when it is done properly, with a view to achieving an outcome beneficial for both the Organization and the Individual.

“Automating the testing and the QA aspects can deliver an ROI up to 250% to 300% month over month, according to Chris DeGonia, director of QA at International SOS. In a recent podcast with Kalyan Rao Konda, president and head of the North America East business unit at Cigniti, he credits the ability to automate the flow, across repeatable processes, checks, and balances in the system.”

https://opensource.com/article/19/5/values-devops-mindset

The skillsets required for a DevOps project/practice to succeed are already present in most team members: developers know how to use scripts and have worked with Puppet, Chef and Ansible and with CI/CD tools; the test team similarly has a good grip on C#, Jenkins, shell scripts, automation, performance and CI/CD tools. Most of the team have worked with the cloud and with Docker and Kubernetes too. But the mindset is what needs to change:

  • Start small, so debugging becomes easy
  • Break stuff, so that you know where and what is going wrong
  • Embrace your mistakes and rectify them fast
  • Educate each other on tools and fixes
  • Project management needs disruption, don’t be caught up on costs and timelines
  • Promote collaboration with the team members and business stakeholders

All these would result in a DevOps (Agile, Collaborative) culture, where the following would hold true:

  • Collaboration between the development teams and the business
  • Faster and on-time delivery of products/projects
  • Employee engagement and happiness (they get to learn and implement the learnings)
  • Innovation in the form of the smaller increments, where direction can be changed with nimbleness and finesse.

Thus, teams need to embrace change and provide more guidance to each other to ensure that DevOps with CI/CD can be implemented successfully. The DevOps practice helps deliver the product/application faster and better, using tools and scripts to automate the build, deployment and testing of the software.

In conclusion, there are six basic principles that define a DevOps mindset (as mentioned in the DevOps article on ZDNet):

  • Be about serving the customer: “DevOps organizations require the guts to act as lean startups that innovate continuously, pivot when an individual strategy is not (or no longer) working, and constantly invests in products and services that will receive a maximum level of customer delight.”
  • Create with the end in mind: IT organizations “need to act like product companies that explicitly focus on building working products sold to real customers, and all employees need to share the engineering mindset that is required actually to envision and realize those products.”
  • Encourage end-to-end responsibility: “Where traditional organizations develop IT solutions and then hand them over to operations to deploy and maintain these solutions, in a DevOps environment teams are vertically organized such that they are fully accountable from concept to grave.”
  • Promote cross-functional autonomous teams: DevOps teams “need to be entirely independent throughout the whole lifecycle,” and even “become a hotbed of personal development and growth.”
  • Continuously improve: “Minimize waste, optimize for speed, costs, and ease of delivery, and to continuously improve the products/services offered.”
  • Automate everything you can: “Think of automation of not only the software development process (continuous delivery, including continuous integration and continuous deployment) but also of the whole infrastructure landscape by building next-gen container-based cloud platforms that allow infrastructure to be versioned and treated as code as well.”

To close it all, Calvin & Hobbes is required! 🙂

https://www.slideshare.net/kermisch/shifting-to-a-dev-ops-mindset-lnkd




Automating for the Future

1 01 2018

When we discuss automation, we talk about frameworks and tools for automating an application or a user’s product. People who want their applications automated usually start by taking up an open-source (or commercially bought) tool and using it for simple record-and-playback script creation. This starts the cycle of making those scripts more robust and making them work over the application. Finally, the scripts are joined together and their developers start calling them frameworks. This is the beginning of the confusion and chaos of test automation.

Testing teams tend to believe that once such a “framework” has completed a regression cycle for a given release, it is the best piece of work they have created and will serve any and all releases from then on. What they forget is the basic rule of software: it evolves, and the test software has to evolve with it. They create an application-specific and tool-specific “framework”, which might be just a collection of scripts that execute the test cases for their application or product and nothing else, sending out rudimentary reports that someone may one day look at and realize that everything has been failing for the past two weeks 🙂

 

There is a plethora of test tools roaming the open-source and commercial testing world these days. They are all good at what they advertise themselves for. But they share an inherent problem: they are generic (catering to out-of-the-box standards) in nature and require a framework to be developed on top of them to take care of the specific needs of the user’s application and/or product.





Automating with Agile

31 12 2013

Agile is not a new word to the world of Information Technology. Automation has been said to be one of the key practices to making an Agile project possible. This in many ways may be considered true. I have been going through some good established practices of Agile, where most have been based on some basic level of automation, which helped in making a success of the project. I have penned down some thoughts on what it means to have test automation along with Agile practices.

There are many schools of thought that have gone into the agile way of running a project and managing the various components that finally lead to its delivery. When we consider Agile and its various derivatives, we realize that each organization has its own way of dealing with the complexities that come with it. There is mention of starting with a session to discuss and elaborate the scope of the project and how it can be broken into smaller pieces, which then become the initial requirements. These can be distributed to the team, made into story cards, attached with t-shirt sizes and put up on the scrum board to be picked up in batches and worked on. In this fashion, a project progresses with minimal friction and gets completed within the estimates provided by the t-shirt sizes. Most of the time this might not be entirely true, but this is how some organizations perceive and practice Agile; in the process giving negligible time to automation, as manual tests take up the majority of the time.

When we talk of automation in Agile, it consists not only of the testing component but of an overall ‘continuous integration’ flow: from check-ins, builds, unit tests, defect handling, and integration & system tests to the final deployment on the ‘test’ server for User Acceptance Testing (UAT). Agile shops often miss out on this flow, which should be the first thing completed for a project to run smoothly through its life-cycle. There is a multitude of tools available for making these tasks simpler and more robust; to name just a few I have used – Jenkins/AntHill, Maven/MSBuild/make/Ant, SVN/Git, JIRA/FishEye, Crucible, TOSCA/QTP/pCloudy.com/Selenium.

In the path to Agile, we forget that we need to plan for automating the complete build-deploy process too, and that includes the crucial part: integration & system tests. A thoughtful plan would be to build the initial framework using stubs for the interfaces and, when the real components get built, replace the stubs with the real thing. Often what I see is the perception that automation should start only when a clear and stable build is provided; in a way this might be true, but not for Agile, where you really need to be agile and think on your feet. Start by implementing a strategy wherein you have stubs ready and a CI platform available, so that testing can begin without code. This was the first lesson I was taught: we created test cases based on the ‘pseudo-algorithm’ and the interfaces we had written. The tests are developed so that all fail initially and, as the code is delivered, they start to pass according to the requirements provided.

If you have done this then you have taken that crucial step towards Agile automation, that will take you a long way in making the project a success for you and your Team.

 





Working with TOSCA (Part 2)

28 04 2013

This has been a long overdue post from my end, and as I now have some time at hand, thought it was better to put it down.

TOSCA has been promoted by Tricentis in Australia for the past 3+ years and has risen from being an unknown tool in the ANZ markets to second position behind the ever-prevalent QTP (which, under HP’s banner, has undergone many iterations and name changes). Tricentis has used MBT (model-based testing) principles to make TOSCA an easy tool to use and implement. It allows the test team to concentrate on creating the actual workflow of the application from the ‘artifacts’ provided in the initial ‘Requirement’ and ‘Test Case Design’ sections. From there, it is a simple case of either matching these test workflows with the appropriate screen objects (‘Modules’) or running them manually [yes, a ‘Test Case’ created in TOSCA can be run as a manual or an automated test]. TOSCA provides a ‘Reports’ section, in PDF format or from the ‘Requirement’ tab, which gives an overview of what has been created, what is automated and what has passed/failed. The ‘Execution List’ tab provides a simple way to define the different ways (and environments) in which you can run your test cases.

As I wrote in my previous post, TOSCA should be started from the requirements of the application, where the application is broken into workflows and each is assigned a weight-age. This provides the base for creating the test cases in the ‘Test Case Design’ section.

The ‘Test Case Design’ is the interesting part (and, Tricentis claims, not yet offered by any other tool). Here you need to dissect the requirements and the application to create each attribute and assign its relevant ‘equivalence partitioning’. Sometimes this may not be necessary, and the TCD simply acts as a data sheet for the test team.

For most automation tools, you begin with the application and then match it with the requirements. TOSCA wants you to start from the requirements and build it to the actual tests. Then you add in the actual application and you are on the way to creating a well thought out automation or manual test practice.

Now TOSCA v7.6.x has come out with a new cross-browser testing concept called TBox. This allows you to create a ‘Module’ in one of the main browsers and then use it across IE, Chrome and Firefox.





Working with TOSCA

23 07 2012

For the past few months, I have been working on a new paradigm in automation, with a “Model Based” tool from Tricentis – TOSCA. Overall, it has been quite a different experience. It does not contain any code, and builds from the requirements a model of what the actual application will contain. The catch is that initially you do not need to define your test cases from the application end, and things might not even be in the sequence of what the final application would look like.

I have an analogy for this: a human body is composed of a head, body, hands and legs. Each has its own “attributes”, which in turn have “instances”. This is what is called the ‘model-based approach’. Each hand has attributes such as fingers, nails, elbow, forearm, wrist, etc. All these attributes then have instances: long fingers, short fingers, thick fingers, etc. Now, to build a body, you need to join all these “attributes” into a seamless whole with the various parts working in tandem. This is what a test case looks like in TOSCA: the initial parts of the body are the Test Case Design, the joining together of the parts is the test case, and the final infusion of blood is the execution and reporting [I have not used Frankenstein here, as TOSCA tends to create a human rather than its alternative :-)]

TOSCA takes its roots in Object Oriented Modelling, employing concepts such as separation of concerns and encapsulation. In TOSCA, you can create classes, attributes and instances (objects). This modular breakdown makes the understanding and management of the actual requirements fairly simple; without going into how the final system under test would look like. I find this a very cool thing; although it took me some time to understand the concept in relation to the current bombardment of the existing Test Frameworks and Tools.

Again, the interface has a very intuitive design, which can be modelled according to the needs and quirks of the person working with it. People might argue here, that it is the same with Eclipse and other such tools like MS Visual Studio Test Professional, but the concept is totally different with TOSCA. You have the drag & drop capabilities, combined with a good integration across all the functionality provided from putting in the requirements to the final reporting; all in a single interface and tool, with support from a dedicated and technical team to get over the initial hiccups of using it.

The next good part, I found, was its capability to extend its technology adaptors (adaptors are used to automate tests against systems developed in various technologies, such as HTML, Java, .NET, Mainframe, Web Services, etc.) using the ubiquitous and simple VBScript and VBA; which is prevalent as the development language of choice in the Testing Community. I found this quite interesting, as we can now easily use TOSCA with almost any system, which we can code to make the underlying adaptor understand. For example, we had a hybrid mainframe green screen application to test (a rich Java GUI with an embedded mainframe emulator), which after a week’s work was ready to be tested with TOSCA; I have not come across such quick development cycles with other tools I worked with/on. That said, TOSCA has the capability to extend itself to different backend databases with the ease of just creating a simple module for it and using that module throughout your test cases to create a connection and then run your customized SQL queries.

If you start from the Requirement Definitions part, you can easily put in your current requirements and provide a measure of weight-age for each.

Then comes the part where you can extremely easily define the actions you can do on the objects which form your test cases. TOSCA by default defines 6 such actions – Do Nothing, Input, Output, Buffer, Verify and WaitOn, which take care of how a particular attribute defined earlier in the Test Design is taken action on.

More on this coming up soon…





Automation Tool across Web, Mobile and Web Services!

26 03 2012

Earlier in the week, one of our senior managers sent me a request about the best tool for automating tests across the spectrum of Web (HTML & Flash), Mobile (iPhone, Android, Windows, etc.) and Web Services. What I could come up with is the following. People may disagree with these options and have different opinions and views… please feel free to comment and put them through, to improve the content 🙂

Looking at the problem from the requirements viewpoint, I believe Selenium would be the tool best suited for the above automation work. The issues that might count against it are that its mobile product is still in beta, and that it is not the best for web services testing, Watir being the frontrunner among the open-source (i.e., free) tools in that category. There are also commercial tools available with good support and a good interface, making the automation easier to maintain; maintainability is somewhat of a problem with open-source tools if they are not properly designed initially. Commercial products also have a big following and hence can be cost-effective in the long run: although they might be expensive to procure, finding a resource who is great with an open-source product can sometimes be a big recruitment headache.

That said, Flash/Flex is a group which, with almost all tools, requires a debug/special build to be provided for testing. Each tool has its own quirks and libraries with which the Flash/Flex application needs to be compiled. So you might wish to look into each tool's individual ability and reviews of its Flash library functionality, especially for web-based applications.

Coming to mobile applications, the market is a very fragmented field to test successfully: the Android browser, iPhone Safari, IE Mobile and Firefox are the major browser contenders for the available automation tools, alongside testing of the apps themselves on iOS, Android, Windows Phone and the various other vendors out there. I have seen many people refer to the Experitest SeeTestMobile tool, which might be becoming a tool of choice for many these days.

Below I go over some of the tools that might help in each group, some of which cover multiple categories. These opinions are my own, from what I have experienced with the tools, and all are free to criticize and cajole me into making changes 🙂

Selenium

Advantages: Good for web GUI testing. Great tooling is available for the Firefox browser, and the new WebDriver combined with the PageObjects concept makes it a great cross-browser test tool for the HTML/JavaScript web. It even has a Flex/Flash plug-in for compatibility with [debug/developer] Flash applications. Can be coded in multiple languages (Java [most popular], Perl, PHP, Python, C#, etc.). This is a free open-source tool.

Disadvantages: Not very intuitive; depends on coding skills and good design. The new WebDriver is good, but not many in the market can create really good frameworks and use it properly. Requires knowledge of XPath and JUnit-style coding to do anything great with the tool. The mobile product is still in beta. Skilled people are scarce, and consulting fees can be high.
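The WebDriver-plus-PageObjects combination mentioned above can be sketched as follows. (A stubbed driver stands in for a real browser here so the sketch is self-contained; a real test would use `selenium.webdriver`, and the page and locator names are hypothetical.)

```python
class StubDriver:
    """Stand-in for a WebDriver instance; only mimics find_element."""
    def __init__(self):
        self.fields = {}
    def find_element(self, locator):
        return self.fields.setdefault(locator, {"text": ""})

class LoginPage:
    """Page Object: locators and page actions live here, not in the test."""
    USERNAME = "//input[@id='user']"   # XPath locators, as noted above
    SUBMIT = "//button[@id='go']"
    def __init__(self, driver):
        self.driver = driver
    def login_as(self, user):
        self.driver.find_element(self.USERNAME)["text"] = user
        return self  # tests chain page actions and never touch raw locators

driver = StubDriver()
page = LoginPage(driver).login_as("tester")
print(driver.fields[LoginPage.USERNAME]["text"])  # tester
```

The design win is that when a locator changes, only the Page Object is edited; every test that uses `login_as` keeps working unchanged.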

HP Quick Test Pro

Advantages: Well supported, with lots of certified resources available, though mostly used in financial institutions. Integrated add-ons for Flex, Web Services, Silverlight and Web HTML. Framework issues can easily be taken care of with the Odin AXE framework, which uses XML and a simple interface.

Disadvantages: Struggles to recognize complex UIs and dynamic content. Mostly used for data-driven web testing with Excel sheets – easy for the user, but this may cause maintainability issues. Windows-only; not suitable for Unix clones and Mac OS. High deployment costs.

MicroFocus / Borland SilkTest

Advantages: Good tool for Web and Flash. (MicroFocus bought it after Borland failed; I am not sure of its development path going into the future.) Has support for other platforms and operating systems.

Disadvantages: Steep learning curve due to its own test-coding language. Not many people available with knowledge of the tool.

Watir

Advantages: Good open-source tool for Web Services and web testing. Used with Fitnesse, it produces web and web-services tests that are easy to create and support.

Disadvantages: Not too good with Flash and Mobile. Uses Ruby as the language of choice, a skill that is getting hard to find for testing.

 

SAHI

Advantages: Great tool for web testing, with a good variety of plug-ins for various other technologies. Available as a free version and a supported paid version. Support is great; the developer of the tool is quite helpful in working out issues with the test team. Good for complex websites, where other tools may sometimes fail. Unlike Selenium, it does not use XPath to identify objects, and it can be used across browsers for recording tests.

Disadvantages: Only used for web testing for now [not sure if it has been updated with plug-ins for others]. Limited adoption, so not many people know about it.

 

SmartBear SoapUI

Advantages: Great tool for Web Services Testing from Smart Bear.

Disadvantages: Only useful for Web Services testing (though this might be an advantage if you plan to make Web Services testing a separate activity).

TestComplete

Advantages: Good tool, very similar to HP QTP, with a good interface and price. Overall good for Flash/Flex with the included libraries. SmartBear has a full stable of tools which, if bought together, may help with pricing, deployment and support. Uses VBScript/VBA for coding, so people with QTP experience may find it easy.

Disadvantages: Flash/Flex testing is still not very stable; it sometimes fails to recognize the separate objects.

Microsoft Visual Studio Test Professional

Advantages: Natively attached to the Visual Studio product line. Great for cloud and .NET application testing. Good if you have Windows Phone applications. “CodedUI” is an excellent tool for cross-browser web HTML testing. MS does deals to get the testing community to start using their tools 🙂

Disadvantages: Mostly for MS technologies only. Not good for Firefox and Android. Coded UI tests can only be written in C# or VB.NET.

Odin AXE Framework

Advantages: Great tool for building a wrapper over existing tools' scripts; it converts the tool's identified objects into an XML-recognizable format and has a great, easily understandable format for automation testers.

Disadvantages: None that I can think of for now, except that an underlying tool is effectively compulsory for a framework created in AXE to work. Odin has done a good job of making the tool robust for web-testing tools, and it is compatible with almost all other commercial tools available.

Tricentis TOSCA

Advantages: Combines requirements, test-case design and test-case execution in one single application. Good when there are business testers who know what the application does and good documentation is available.

Disadvantages: Not very flexible when it comes to handling unexpected behaviour within the application; it likes a clean interface to run through test cases and prefers the "happy" path.

I can provide some more research into newer tools (and some less-known but good ones), but the above are some of the common ones in use.

I am not advocating any one tool above; to each his own – it depends on what one has worked with and would be comfortable using.





Lessons from GUI Testing

11 01 2012

I recently started working in the GUI testing space again. It is an interesting space, with loads of commercial and open-source tools available. Although each tool may have its own unique features to bring to the fore, I realized that there are some basic, fundamental steps that need to be taken to get things moving in the right direction. I have tried to put these steps as succinctly as possible in this post.

The initial step is to realize that although GUI web-based applications may vastly differ from each other, they have one common element that needs to be looked into – the 'objects' which make up the page. Every web page (or, for that matter, GUI-based application) has these, and each tool has its own unique way of looking at and identifying them. The basic assumption is that the devs have done a spot of good coding and provided meaningful, unique names to all the visible objects of the application 🙂

Map these objects to the web application's pages and half the work of automating the app is complete. The crucial part is that the automation engineer must use the names he assigns during this initial setup and mapping stage. We cannot rely on the names provided by the dev team, as these may be generic and/or not properly worded, and so fail to identify the object on the page correctly.

So, from my viewpoint, you need to start any GUI automation by first mapping all the objects and giving them proper names. With this work done, arrange them into a proper flow so that you create the required test scenario as provided by either the business or the customer. Having the initial mapping of the objects is the biggest help that can be obtained. I will post further on the different tools and how to build this great library of objects with each of them.
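The mapping idea above amounts to a simple object repository: tester-chosen names on the left, raw locators on the right. (A minimal Python sketch; the locators and names are hypothetical.)

```python
# Tester-assigned names -> raw locators the tool actually uses.
# The dev-provided ids here are deliberately generic, as the post warns.
OBJECT_MAP = {
    "login.username": "//input[@id='txt_98']",
    "login.password": "//input[@id='txt_99']",
    "login.submit":   "//button[@name='btnA']",
}

def locator(friendly_name):
    """Tests refer only to friendly names; locator changes stay in one place."""
    return OBJECT_MAP[friendly_name]

print(locator("login.submit"))  # //button[@name='btnA']
```

When the dev team renames `btnA`, only one entry in the map changes, and every test flow built from these friendly names keeps working.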





Test Coverage – A Concept!

24 10 2011

These days I am trying to work on a concept known as Test Coverage. I call this a concept, as it starts off as something in the mind of management, filters down to the manager and is finally handed to the tester to carry out. Without anyone actually realizing it, a graphical representation of our work soon comes out, in something people call Business Intelligence (another much-hyped term these days, but I will come to it later). The graphical representation goes on to show that the current set of tests which have been implemented/created cover either "X" lines of code or "Y" number of business screens.

Is this a true representation of the complete scenario? No thoughtful Test Manager or Dev Manager would like to think so. The above is a misnomer of how we go about treating an important issue like Test Coverage. Let me take you through a typical "Software Test Life Cycle" (don't even start me off on that one). The requirements come out in the form of a BIG bunch of documentation, which has gone through various iterations and reviews with the business people and the other stakeholders involved (but rarely the test team). This neatly typed bundle is handed over to the test team in an official ceremony, which we call the "Beginning of the Test Cycle". The Test Manager goes over this vast bundle of joyous documentation and then, based on his "past" experiences, provides an estimate of what will need testing and what test cases can broadly be done. This is called the "Estimation Period", as a rough time period is usually provided for when the test team will finish – including Automation, Manual, Performance, Security and the whole jig-bang.

Once this "Estimation Period" is through, the task is handed over to the leads to break down and estimate, based on what the Test Manager has already provided. Until this point, the actual team members are usually not consulted; the seniors of the team are the confidants who decide what the underlings do. Finally a document starts taking shape, which for the sake of convenience we call the "Test Plan" or the "Test Strategy", for want of a better name. This soon becomes the golden Bible/Vedas for the test team, and they have to adhere to what it says. Thereby the official STLC starts!

Once you have converted the BRD (Business Requirement Document) or the PRD (Product Requirement Document) into your test cases, you need to start actually implementing them. This is where you bring in concepts like the Test Matrix and Test Vectors, which in layman's parlance (developer speak) describe the way your tests are structured across the various data points for a particular view of the application. Now comes the really good part! This is also where the above-mentioned superior tester comes out and says that we have a test coverage of "X" lines of code or "Y" number of business screens (for GUI applications, which are usually 90% of tested applications). But does he actually know what he has covered with his test cases? Some do, while some have just made assumptions after reading blogs such as this one, or from their superiors, who again might have obtained their knowledge from such places. The test cases are sorted out: some go over to the automation team to put into their regression suite, while others are manually vetted and put through the paces of the "Bug Life Cycle"! (What this means to globally scattered teams depends on how much management has spent on procuring a good issue-reporting tool. My recommendation would be to look into Joel Spolsky's FogBugz: http://www.fogcreek.com/fogbugz/.) But to each his own …

Once the task of creating test cases and shoving them into the automated test suite is completed, the Test Manager will jump in and click a variety of buttons on his console (something his team has created to make life a brisk walk for him, or management has spent some more money procuring another one of those efficient tools out there). And voila – a beautifully colored report of what passed, what failed and, especially, "how much of the code/screens were covered by our testing". Definitely a thing of beauty for the management!

But what is the real usefulness of such a report? In my honest opinion (IMHO), zilch… NIL! We did a good job of covering all the lines of code, but did we cover the paths through which the code would be executed? I don't think that is considered even 25% of the time. Did we make sure that boundary values are covered? We might have a few test cases for this, but do they map to our coverage? Did we take care of the specific values that a few fields on our screen work on? No, this is a definite gap most of the time… What we did do was this: a) ensure that at least 85-90% of the code lines are covered by our test cases, executed using the automated scripts (good! this might be hard to do through manual tests, so no offence to manual testing here); b) make sure that all the GUI screens are covered.

But did we make sure that all the fields on each screen are covered? Usually not. These are the places where we get issues. Also, most of the time, negative testing is not given enough importance. The usual rant being: a) we did not have time; b) it is not that important, as such a case would not happen in production. But these things are important, and they convey the coverage of our tests. I will try to bring out more facets of this in my next few posts; hopefully those will be more helpful than this one, which just rants about what is not being tested and/or how badly we test things …
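Boundary-value coverage, for instance, can be generated mechanically instead of hoping a few cases happen to exist. (A minimal sketch of classic boundary-value analysis; the field and its range are hypothetical.)

```python
def boundary_values(lo, hi):
    """Classic BVA: just below, on, and just above each boundary of a valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# A hypothetical 'age' field accepting 18..65:
cases = boundary_values(18, 65)
print(cases)  # [17, 18, 19, 64, 65, 66]
# 17 and 66 are exactly the negative tests that "did not have time" usually skips.
```

Mapping generated cases like these back to the coverage report is what turns "X lines covered" into a claim that actually means something.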







