The Evolution of GitOps to Environments as a Service 

Way back in 2013, I sat down for a chat with SiliconAngle founder John Furrier and Wikibon analyst Dave Vellante inside theCUBE at the Strata Conference. It was in this interview that I mentioned the “Git Effect.” John appreciated the idea immediately and discussed the concept with another technologist in a subsequent interview.

None of us knew the massive scale of our foresight.

When enterprises adopted Git, they modified the purist vision of a peer-to-peer network of trust that Linus Torvalds used as the foundation for Git. The biggest modification was the introduction of a central main repository. Since Git was all about decentralization, letting the most trusted commits rise to the top, a central repository initially sounded counterintuitive.

In open source, where developers donate their time, the cost of rejected commits is not significant. In an enterprise, every developer hour costs money, so some process is needed to prevent duplicated effort. There also needed to be a single most-trusted repository, pristine enough to be used for production deployments. In Linux, all the best changes made it into Linus's repository, which everyone then cloned. What could be done in an enterprise to enable the same with Git?


The Era of the Pull Request

While an enterprise does not have an unlimited supply of developers like open source does, there still needs to be a mechanism to vote on a change or commit – this is where the concept of the pull request comes in.

A developer forks the main repository, commits the appropriate changes, and then opens a pull request. Other developers review the change through their own lens, and if there are no objections, the pull request gets merged into the main repository.
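The fork-and-pull-request flow above can be sketched with plain git commands. This is a local simulation: a branch stands in for the fork, and the final merge stands in for the step a hosting platform performs once reviewers approve.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

# Maintainer's baseline commit on main
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"

# A contributor branches (a local stand-in for a fork) and commits a change...
git checkout -qb feature/new-idea
echo "v2" > app.txt
git commit -qam "propose change"

# ...and once reviewers raise no objections, the change is merged into main.
git checkout -q main
git merge -q --no-ff -m "Merge pull request: propose change" feature/new-idea
```

On a real platform the merge commit is created by the review tool, but the resulting history is the same shape as this sketch.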



There are various definitions of GitOps, depending upon where you look. The easiest place to look is within the term itself, a portmanteau of Git and operations – in other words, using the power of Git to help operations.

Operations start with setting up infrastructure, which traditionally has been created manually or with configuration management tools like Ansible, Chef, and Puppet, with no version control applied to the scripts involved. GitOps changes that by storing infrastructure-creation code in Git.

Making the infrastructure operational takes much more than storing it in a source control system. According to GitLab, GitOps consists of three parts:

  1. Infrastructure as Code (IaC)
  2. Merge Requests (aka Pull Requests)
  3. Continuous Integration/Continuous Delivery (CI/CD)
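The first two parts can be sketched with git alone: an infrastructure definition is versioned like any other code, and a change travels through a branch that would back a merge request, with CI/CD reacting on merge. The `.tf` content below is a placeholder for illustration, not a working Terraform module.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "ops@example.com"
git config user.name "Ops"

# Part 1: the infrastructure definition lives in Git (placeholder content)
cat > main.tf <<'EOF'
# illustrative resource, not a real Terraform provider
resource "example_server" "web" {}
EOF
git add main.tf
git commit -qm "infra: initial definition"

# Part 2: a change travels through a branch that backs a merge request
git checkout -qb infra/resize-web
sed -i 's/"web"/"web_large"/' main.tf
git commit -qam "infra: resize web server"
git checkout -q main
git merge -q infra/resize-web
# Part 3 would be CI/CD reacting to the merge, e.g. a plan/apply step.
```

The point of the sketch: once infrastructure is just files in Git, the same review and merge machinery that governs source code governs infrastructure changes too.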



Hold on… am I dealing with source code or infrastructure code and where is Ops?

There are multiple moving parts here. The first question is what actually goes into a repository. (We’ll discuss this at a later time.)

In the text above, we covered the different parts of the GitOps equation; here are the moving parts:

  1. Repository
  2. Code Type
  3. Provision


1. Repository

There are patterns and there are anti-patterns. In the case of repository design, there are two candidates: mono-repo and multi-repo. The challenge is that which one is the pattern and which the anti-pattern depends upon the eye of the beholder. That being said, in our opinion, the mono-repo is an anti-pattern that is not very helpful in GitOps. The reason is that Git treats the whole repository as one unit and applies changes to it as a whole, so a mono-repo may work for smaller codebases, but large codebases become an operational nightmare.


2. Code Type

Since infrastructure is also saved in Git, the code can be infrastructure as code (IaC) or plain old source code.


3. Provision

Writing both cloud infrastructure and application infrastructure as code is good, but there needs to be a way to bring it to life. Provisioning is most mature for production environments: infrastructure is provisioned with operations like Terraform apply, while service provisioning is handled by continuous-deployment solutions.

The completely ignored area is how to provision pre-production environments. This is the Wild Wild West, with the majority of these environments created by unmaintainable custom scripts.



Convention over Configuration:
Infrastructure as Code 2.0

IaC is good, but not having to code infrastructure at all is even better.

This is where leading Environments-as-a-Service (EaaS) platforms like Roost come into play. One reason a simple "apply" command in leading IaC tools cannot work on its own is that there is no single definition of how many pre-production environments are needed. The traditional approach has been to create a separate pipeline for each pre-production environment – QA, staging, and so on. This is why it is important to have a different first-class citizen on which to base these environments.

I have one rule for whenever you are in doubt:
Treat each software artifact or component as a state machine.

If we think of the collection of all pre-production environments as one state machine, we realize that every state change in this machine is reflected in a pull request. Therefore, if we use the pull request as an anchor, all problems with environment creation get solved.
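As a toy illustration of this idea – the function name `on_pr_event` and the directory-per-environment model are mine, not any particular tool's – each pull-request event drives one transition of the environment state machine:

```shell
set -e
envs_dir=$(mktemp -d)   # stand-in for wherever environments actually live

# One transition function: a pull-request event moves the state machine
on_pr_event() {  # $1 = PR number, $2 = opened | merged | closed
  case "$2" in
    opened)        mkdir -p "$envs_dir/pr-$1" ;;  # spin up an ephemeral env
    merged|closed) rm -rf "$envs_dir/pr-$1" ;;    # tear it back down
  esac
}

on_pr_event 101 opened   # PR 101 opened -> env pr-101 exists
on_pr_event 102 opened   # PR 102 opened -> env pr-102 exists
on_pr_event 101 merged   # PR 101 merged -> env pr-101 is gone
ls "$envs_dir"           # only pr-102 remains
```

Because every environment's lifetime is tied to exactly one pull request, there is never a stale environment to clean up by hand – closing the PR is the teardown.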

Let's do the following thought experiment:

Discard all pre-production environments, and assume we have a simple project: one perfect developer, one production server, one codebase. It's a simple development-production state machine. The developer can manage the complete codebase because it is small, and every change is made by this perfect developer. There is no need for a version control system yet.



The Developer is Perfect, the Product Manager is Not

Scenario 1

The product manager (PM) asked the developer to work on a feature but later realized the feature was not needed (for some random reason). Now the developer needs to restore both production and the codebase to their previous state. This creates the need for a source code version control system as well as a deployment tool with a rollback feature.
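The code-side rollback in Scenario 1 can be sketched with git revert; the production-side rollback would be the deployment tool's job. The file name and commit messages are illustrative.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > feature.txt
git add feature.txt
git commit -qm "baseline"

echo "v2" > feature.txt
git commit -qam "feature the PM asked for"

# The PM changes their mind: restore the codebase to its previous state.
# `git revert -n` stages an inverse of the last commit without committing,
# so history still records both the feature and its removal.
git revert -n HEAD
git commit -qm "roll back: feature not needed"
cat feature.txt   # back to v1
```

Note that revert adds a new commit rather than erasing history – exactly the audit trail an enterprise wants when a deployed change is withdrawn.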

Scenario 2

The product manager asked the developer if there was a way for the PM to see how a feature works before it gets pushed to production. This created the need for a staging environment.


But, the Developer is Really Mortal

Now, since the developer is also mortal, he or she is going to make a few mistakes. These mistakes need to be discovered before the product manager sees them in a live release. This specifically created the need for a test environment.

Stay tuned… to be continued in an upcoming post.




Rishi Yadav

About Rishi Yadav

Rishi is the CEO and Co-Founder of Roost and has over two decades of experience in leading enterprise application teams. He is a published author and active blogger.
