In business, you have to keep developing so as not to be left behind, especially in the world of new technologies. Here, modern solutions are not just a fashion but a real benefit: faster applications, easier implementation of new functionality. Continuous Integration / Continuous Delivery (CI/CD), for example, has been part of IT for a long time, and not by accident.

This approach improves the workflow and significantly accelerates the release of new software versions. In some projects, every code merge to the main branch uploads a new version of the application to the server. This no longer impresses anyone (although many projects, for various reasons, still lack such a setup), yet it remains a direction that many teams in IT are still striving toward.

Trends in IT and why GitHub CI/CD is here to stay

If we look, for example, at the differences between services such as GitHub and GitLab, or at the most popular applications in their marketplaces, the automation and support for CI/CD solutions are immediately noticeable. But what does this have to do with the topic of backup? Contrary to appearances, quite a lot. Many modern IT solutions are not adopted by accident; they simply bring real benefits.

Let’s go back to the years when SVN was the most popular VCS on the market. Then suddenly a new player appeared – Git. The initial trend carried the industry to a point where 94% of developers use Git in their daily work (according to the Stack Overflow 2021 Developer Survey), and the OpenHub portal indicates that as many as 73% of repositories are Git-based. This is due to the many advantages of the system, not a passing fad.

Continuous Integration and Continuous Delivery

I will come back to this aspect for a moment because it is important from the perspective of this article. As I mentioned, CI/CD is now standard in many companies. Continuous Integration allows us to minimize technical errors, integrate more frequently, and achieve better code quality. Continuous Delivery, in turn, automates the delivery of changes to the appropriate environments. The possibility of extra steps, such as restarting a server or running special scripts, is also a nice addition. Here we can distinguish stages such as Build, Test, and Publish; chained together, they form a modern pipeline.
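As a rough illustration, the Build, Test, and Publish stages described above might be declared like this in a GitHub Actions workflow. This is only a sketch: the workflow name and the `make build` / `make test` / `make publish` commands are hypothetical placeholders, not taken from any real project.

```yaml
# .github/workflows/pipeline.yml -- a hypothetical minimal pipeline
name: pipeline
on:
  push:
    branches: [main]       # run on every merge to the main branch
jobs:
  build-test-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build        # compile / bundle the application
        run: make build
      - name: Test         # a failing test stops the pipeline here
        run: make test
      - name: Publish      # deliver the artifact to the target environment
        run: make publish
```

Each step runs only if the previous one succeeded, so a broken build or a failing test never reaches the Publish stage.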

Thanks to such solutions, some projects have no release cycle at all: every change in the code, as long as it is correct and free of errors, goes straight to the production server, and clients can immediately use the new features.

I don’t think I need to describe how brilliant this solution is! Most of the industry probably remembers carefully choosing a release date, many days of preparation, overtime, and several hours of application downtime just to upload a new version. I hope that before long this will be nothing more than an anecdote.

Backup as part of the development process

Now let’s add another step to this well-known flow. The need for backups is obvious; how up-to-date such a copy should be is still sometimes a matter of discussion. Some say that if we lose access to the external repository, or there is a failure, someone will have an “almost up-to-date” local copy and we will lose a few commits at worst. To me, that is not entirely true. But even if it were, any loss is unacceptable, especially when it is so easy to avoid!

Apart from the possible loss of data, in my opinion the more important cost is the time needed to repair the damage and restore the repository to full working order. Thirty minutes without access to the repository may not sound scary, but what if the company employs 300 programmers? Suddenly those 30 minutes become 150 hours of lost work. Can we wave off such a loss? It won’t ruin the business (this time), but it is simply a huge waste.

So how do we avoid it? As I mentioned above, let’s add a Back it up step to the well-known Build, Test, and Publish scheme. Such a step should, of course, create a backup outside the repository host we are using. We can use, for example, the tools available in GitLab or GitHub Actions, or professional, fully manageable repository and metadata backup software such as GitProtect.io. A seemingly small step, but it changes the way we approach our pipelines and guarantees that every working version has a backup.
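To make the Back it up step concrete, here is a minimal sketch of what it could do under the hood, assuming plain `git` is available on the CI runner. The `backup_repo` helper name and the paths are hypothetical; managed services such as GitProtect.io cover the same ground (and also back up metadata), so this only illustrates the underlying idea of copying every ref to storage outside the repository host.

```shell
# Hypothetical "Back it up" pipeline step, sketched as a shell function.
backup_repo() {
    repo_url="$1"    # any URL or path git can clone from
    backup_dir="$2"  # storage located outside the repository host
    stamp="$(date +%Y%m%d-%H%M%S)"
    mkdir -p "$backup_dir"
    # --mirror copies every ref: all branches, tags, and notes
    git clone --quiet --mirror "$repo_url" "$backup_dir/repo-$stamp.git"
    # a bundle is a single file that `git clone` can restore from later
    git -C "$backup_dir/repo-$stamp.git" bundle create --quiet \
        "$backup_dir/repo-$stamp.bundle" --all
}
```

The timestamped bundle file is a self-contained snapshot: restoring it is an ordinary `git clone` of the bundle, which makes recovery drills easy to rehearse.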

We must be aware of how important a correct backup is, and above all, how important quick recovery is. The associated costs are often invisible at first glance. How do you price the loss of a few commits, or several minutes without access to the repository? How much does it cost to write and maintain backup scripts by hand, and how does that compare with the prices of third-party solutions? What matters is that today we simply cannot afford any data loss, end of topic!

Is automated GitHub backup the new black?

And now I will try to play the fortune-teller and predict the future. The growing popularity of GitHub CI/CD solutions goes without saying: more and more companies use them every year, and the development of the GitHub Actions feature only proves this. The growing popularity of third-party GitHub backup solutions such as GitProtect.io, which you can test for free here or install directly from the GitHub Marketplace or Atlassian Marketplace, sends the same signal. It seems to me that the next direction is exactly this: extending our pipelines with an additional step that ensures proper source code protection, allows us to easily recover data after a failure or a simple human mistake, and provides a user-friendly interface to control the process. Am I right? Time will tell, but it is worth thinking about it now and implementing similar solutions in our projects.

👉 This is the second article in our new DevSecOps series – which opened with the GitLab CI/CD article.
