Last Updated on August 19, 2024

Shared Responsibility Models, NIS2, DORA, SOC 2 and ISO audits, accidental deletions, and the evolving threat landscape in SaaS apps all confirm that DevOps security has become a priority. CISOs and DevOps teams need to meet halfway to secure the data processed across GitHub, GitLab, and Atlassian without compromising agility and efficiency. However, finding this middle ground is not an easy task.

In this episode of the DevOps Backup Masterclass, Gregory Zagraba discusses the common traps and shortcomings we have seen in DevOps backup and BCDR plans used by experienced CISOs – check whether you are likely to make those mistakes too.

You can read the complete transcript below, watch it on our YouTube channel, or, of course, listen through the Spotify app – don’t forget to subscribe!

TRANSCRIPT – EPISODE 02

Hello everyone, once again, my name is Greg Zagraba, Pre-Sales Engineer at GitProtect, and today we’ve met up to discuss the challenges that CISOs, Chief Information Security Officers, are facing when dealing with DevOps security and all the different types of DevOps data. I’m glad to have all of you with me today, and hopefully we have more people joining us. This live is not only for CISOs: if you are working in DevOps, if you’re responsible for managing a development team, if you sometimes think, hmm, is my data as secure as it can be, or can I trust my vendors such as GitHub or Atlassian 100%, this one is definitely for you.

Why the DevOps landscape is bigger than you think

And as a little bit of warm-up, let’s start with one important question. How big is the DevOps landscape in terms of data? Because, when we think about DevOps, we are thinking quite a lot about some procedures, about some best practices in code development. When we look at Wikipedia and all the information coming from there, it sounds a little bit unclear.

However, from the perspective of you guys who are working in DevOps, and hopefully there are many DevOps engineers with us, this is also about the tools that you are working with, because those are the tools that make your life easy – the tools you cannot imagine working without. Obviously, there are hundreds of different tools you might be working with: Kubernetes, CircleCI, JFrog, tons of them.

But the tools that build the foundation of our work as DevOps engineers are typically related to how we manage our source code – where our developers work, where they create the source code, where they comment on and review each other’s work. And the three most popular platforms in that space are obviously GitHub, GitLab, and Bitbucket.

And on top of this, we are also very often working with Jira, especially in the software teams when we are planning our work in an agile environment with sprints, with our agile coaches, then we are working with Jira as well. And if we think about just those two types of tools, so first of all, our version control system, so something like GitHub or GitLab, and Jira to manage our project work, you can see how many different types of data there are. And when we think about the security of our data, we tend to only think about a small part of it.

Like we think: OK, we have source code, we have a repository, we have this Git file that is, for example, used to deploy our software. However, the important question is: how big a part of your time as a developer or a DevOps engineer do you actually spend writing code, and how much time do you spend writing issues, answering issues, updating Jira, assigning Jira tickets to one another, planning releases, et cetera? Your work, and the work of the whole team responsible for creating and releasing the software, revolves around that data.

And this is just the first thing, a little bit of a warm-up, as we are still at the beginning. When thinking about DevOps security, there are more things to consider than you might initially assume. And if we were to talk about mistake number zero – before we go to the main list – the mistake that CISOs make when thinking about DevOps security is sometimes underestimating the whole topic.

Because many of the areas that CISOs take care of, and that are crucial for them, like GDPR for example, are clear to us: we have regulations that spell out which data is important. This is personal information. This is confidential information. These are the things we need to deal with.

And because we are so focused on that older, regulated territory, and because DevOps is still relatively young when it comes to awareness of data and data security, there is not that much regulation in this space yet. The only one that really touches software development right now is DORA. So we need to realize that there is more to take care of than just the basics. You should really think about a separate data security strategy for your DevOps environments – and your DevOps environments typically revolve around those specific tools.

Top DevOps mistakes

And here’s the important thing. There is a quote that I really like, and I sometimes even tend to overuse it when it comes to data security: si vis pacem, para bellum. It’s a Latin saying which means: if you want peace, prepare for war. My sister, who is a lawyer, always told me that when you are writing any sort of agreement, you shouldn’t think about the optimal scenario, or the scenario as it is today – you need to think about the worst possible scenario.

And I apply the same logic to data security. When we are thinking about best practices, we need to think about scenarios when the war happens, when everything possible goes down. And this is also our major sin because lots of our security practices nowadays around DevOps are based on assuming that those platforms that we are working with, like GitHub, like Jira, like Bitbucket, are here today and they are going to be there forever.

Selective risk analysis

And without knowing it, we become too dependent on them. That’s why we need to approach the security of our DevOps data with a separate strategy. And the first mistake that we make when we think about this strategy is selective risk analysis.

So whenever I’m talking with a CISO, looking into a business continuity and disaster recovery plan, or answering security assessments from our customers, there are certain things we tend to focus on. And those things are not necessarily the things that really matter, because in most cases the questions are about how you respond to natural disasters.

So: how are you prepared to continue operating when a flood happens? How are you prepared to continue operating if there’s a fire in your server room? How are you prepared for major outages or downtimes, for example a blackout scenario? Or we are just checking which regulations you meet in terms of compliance. The way we approach data security therefore tends to be somewhat superficial when we think about all the possible risks, because we focus on those guidebook things – the same things that have been in our business continuity and disaster recovery plans, to be honest, since the 90s, since the early 2000s. We are thinking about the flood, the fire, the blackout, and major incidents like that.

However, if we look at the latest statistics about the reasons for incidents or breaches in data security, both for DevOps and overall, we can see that in 74% of cases a human was involved in the process, for example by making an error or misusing a platform. 83% of incidents involved external actors like criminal organizations, and 24% of incidents were ransomware attacks.

So we can see that the more serious risks, the ones that actually make breaches happen more often, are external threats – malicious actors, ransomware attacks, criminal organizations – or our own staff making mistakes, either by accidentally deleting important data or sometimes by intentional action. We might lose data through a careless bulk update or a force push inside our GitHub or GitLab. And sometimes we have an employee who is simply angry at us, who just got a termination letter and decided that, while they still have permissions to the system, they can do something to hurt their employer. So as you can see, these are the more serious risks.

And obviously, on the one hand, those classic incidents cause breaches far less often because we are better prepared for them – they are our guidebook reasons for incidents. But this doesn’t mean we can or should underestimate the others. We should update our guidebooks.

We should update our BCP templates to start considering things like ransomware and human error, and make sure that the tools we have in place to protect our data – and to let us continue working in case of a security incident – are also prepared for scenarios like ransomware or human-related incidents.

Misclassifying data to backup

Then the next mistake – and this is an especially popular one when we are talking about DevOps data – is not classifying the data to back up. So let’s say we’ve made the call: OK, we need to refresh our business continuity plan and our disaster recovery strategy with a specific place for our DevOps data, because this is where our source code lives.

This is where we keep information about our business, our intellectual property. This is what runs our company in practice. And we think: OK, we need to update it in terms of risks – but now, what data should we back up?

And typically, when somebody approaches this topic without enough knowledge, they think: OK, we have repos, repos, repos, maybe we have Jira or some other type of project management tool. But we are looking at our data as one conglomerate.

We are thinking: OK, we have 6,000 repositories, 7,000 repositories, 500 repositories, and 100 Jira projects, let’s say. But the question I start asking my customers when they tell me something like this is: out of all this, how much of the data actually matters? Why? Because if we try to back all of this up, treating it as one block – as 6,000 repositories, let’s say – it will be extremely difficult to back them all up at once in an efficient way. And keeping all that data in a single storage is probably not even secure.

First of all, if you try to run such a backup of everything at once, without looking into what sits in that data, you can probably make a backup once per week at best. Secondly, when an incident happens and you need to make use of your backup – for example, restore the data – you would be restoring everything at once, because the plan assumes that we back up everything and then we restore everything. And that’s not efficient.

So, coming back to my question – which data really matters to you – we should break it down. First, mission-critical data: software that runs our core infrastructure, software that is the core of our product, key applications for our daily analysis and reporting, something we cannot live without. Then we need to think about our ongoing project data – the data whose loss would stop people from working. We don’t want people sitting idle because of an incident.

We want people to get back to work as soon as possible, even if a DevOps data loss incident happens. If an organization is purely dedicated to software development and suddenly loses access to its GitHub account, people cannot continue working – maybe for a day, based on what they had noted down or what’s on their own devices, but not for a prolonged time. And having such a team idle costs you a ton of money, so you want them back to work as soon as possible.

So those are probably the two things you want to back up separately, with as high a frequency as possible, so that the data loss in case of an incident is smallest there and you are able to restore that data as quickly as possible if needed. Everything else – the source code of your support systems, minor projects, minor improvements, maybe long-term R&D projects, or just the archive of projects you’ve done for internal or external customers in the past – is less critical. Maybe we can back this up with a lower frequency.

Maybe we can restore it later on, not in the first go. If we decide to do this, we can, for example, split our thousand repositories into the 200-300 repositories that matter and the 700-800 that are archived or backlogged – things we can come back to in the long run. Thanks to this, we can optimize around those requirements.
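To make this concrete, here is a minimal sketch in Python of what such a classification-driven plan might look like. The tier names, frequencies, and repository names are hypothetical, purely for illustration; this is not a GitProtect API, just the shape of the decision you agree on with your team leads.

```python
from dataclasses import dataclass

# Hypothetical tiers; the frequencies are illustrative, not prescriptive.
@dataclass
class BackupTier:
    name: str
    interval_hours: int    # how often this tier gets backed up
    restore_priority: int  # 1 = first in line after an incident

TIERS = {
    "mission-critical": BackupTier("mission-critical", interval_hours=1, restore_priority=1),
    "ongoing-projects": BackupTier("ongoing-projects", interval_hours=4, restore_priority=2),
    "archive":          BackupTier("archive", interval_hours=168, restore_priority=3),  # weekly
}

# Classification agreed with team leads and project managers (hypothetical repos).
REPO_CLASSIFICATION = {
    "core-platform":       "mission-critical",
    "billing-service":     "mission-critical",
    "mobile-app":          "ongoing-projects",
    "legacy-customer-poc": "archive",
}

def backup_plan() -> dict[str, list[str]]:
    """Group repositories by tier so each group gets its own schedule."""
    plan: dict[str, list[str]] = {}
    for repo, tier_name in REPO_CLASSIFICATION.items():
        plan.setdefault(tier_name, []).append(repo)
    return plan

if __name__ == "__main__":
    for tier_name, repos in backup_plan().items():
        tier = TIERS[tier_name]
        print(f"{tier.name}: every {tier.interval_hours}h, "
              f"restore priority {tier.restore_priority} -> {repos}")
```

The point is simply that the 200-300 repositories that matter get their own, much tighter schedule and stand first in line for restore, while the archive tier can run far less often.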

And if you’re going to ask me: OK, but how can I take all my data and split the way I back it up in an appropriate way? Firstly, you obviously need to sit down with your team leads, your project managers, and your security engineers and review this to know, more or less, which repositories and which Jira projects are important. And secondly, you need to have tooling in place that allows you to break your backups and backup schedule down into those different categories.

That’s why, if you are not able to do it right now, it might be a good time to start looking for a different technology that enables this.

Inadequate backup frequency

Another mistake that some CISOs make is neglecting backup frequency a little. In many cases, when I’m talking to my customers about their initial expectations for backing up their data, they say that a weekly backup is fine, or a daily backup is fine for some small infrastructure.

And this is a good place to start. However, what we really should do first is some analysis: what can be improved to make this backup frequency higher? Why? Because the higher the backup frequency, the lower the data loss.

All of you CISOs are probably familiar with the term recovery point objective (RPO), which is one of the metrics of how effective our backup is. The higher the backup frequency, the lower the RPO – and in case of an incident, we lose less data.

Because if you’re running a backup every week, let’s say on Saturday, and an incident happens on Friday, you are losing a week’s worth of data. If you are backing up every hour or every three hours, then in case of an incident you lose only an hour’s or three hours’ worth of your most important data (a quick sketch of that arithmetic follows below). Obviously, to improve this, there are plenty of things you need to consider.
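As a quick worked example of that arithmetic (a sketch only, with hypothetical schedules):

```python
def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst-case RPO: the incident hits just before the next backup would have run,
    so you lose up to one full interval of work."""
    return backup_interval_hours

for label, interval in [("weekly", 7 * 24), ("daily", 24), ("every 3 hours", 3), ("hourly", 1)]:
    print(f"{label:>14}: up to {worst_case_data_loss_hours(interval):g} hours of work lost")
# weekly -> up to 168 hours (a full week); hourly -> up to 1 hour
```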

First of all, outdated technology. Very often, when you ask your developers how they back up their code, they will mention: we have scripts for doing it. However, scripts are outdated technology, they are not reliable, and they definitely don’t allow you to run backups with high frequency.

Lack of automation in backup process

If somebody runs a script once per week, that’s already as good as scripts get – and realistically, it’s less frequent than once a week. This is also related to a lack of automation.

So when you are thinking about the frequency of your backups, you should also think about how to implement a solution for automated backups. Sometimes there are more in-depth technical limitations, like lack of resources or throttling – the API rate limits enforced by SaaS platforms like GitHub or Atlassian. Sometimes there are artificial limits enforced by those platforms, like Atlassian not allowing you to run a backup more often than every 48 hours.
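To illustrate the throttling point, any home-grown backup script has to respect the platform’s rate limits or it will simply start failing at higher frequencies. Below is a minimal, hedged sketch for GitHub’s REST API: the token and organization name are placeholders, and the retry policy is an assumption for illustration, not official GitHub guidance.

```python
import time
import requests

GITHUB_API = "https://api.github.com"
TOKEN = "ghp_..."  # placeholder personal access token

def github_get(path: str) -> requests.Response:
    """GET a GitHub REST endpoint, backing off when the rate limit is exhausted."""
    while True:
        resp = requests.get(
            f"{GITHUB_API}{path}",
            headers={"Authorization": f"Bearer {TOKEN}",
                     "Accept": "application/vnd.github+json"},
            timeout=30,
        )
        # GitHub reports the remaining quota and the reset time in response headers.
        remaining = int(resp.headers.get("X-RateLimit-Remaining", "1"))
        if resp.status_code in (403, 429) and remaining == 0:
            reset_at = int(resp.headers.get("X-RateLimit-Reset", str(int(time.time()) + 60)))
            time.sleep(max(reset_at - time.time(), 1))  # wait for the quota to reset, then retry
            continue
        resp.raise_for_status()
        return resp

# Example: list the repositories of a (hypothetical) organization.
repos = github_get("/orgs/example-org/repos?per_page=100").json()
```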

So on the one hand, there’s the outdated technology topic, and on the other hand, there are tons of technical topics that you really need to dive into. And that sometimes becomes a challenge when we try to approach it.

So it’s also good to consider taking on board somebody who has analyzed those different limitations and can advise you on the best possible backup frequency, knowing the constraints and how to work around them. And as mentioned before, if we classify our data, we can optimize backup frequency because we back up separately what really matters. So how do we fix this? First of all, think about how to improve backup duration.

Fix all the things I’ve mentioned here, or find a trusted partner, an expert in backup technology, who can help you do so. Think about implementing incremental backups. Incremental backups allow you to back up only the data that has been added or modified since the last backup you made.

This is extremely useful because, thanks to this, you can run backups even once per hour or once every 30 minutes. That obviously depends on the size of the infrastructure and how many people are working on it at once – how many developers you have making comments in GitHub – but we can work around this. The next one, and this is super neglected: monitor backup duration and determine the average.
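One way to get a feel for both of those points, sketched below with plain Git: a mirror clone made once, then updated incrementally (only new objects are transferred) while the duration of each run is recorded, so you can see how much headroom you have to raise the frequency. Paths and repository URLs are placeholders, and remember that a Git mirror alone does not capture issues, pull requests, or other platform metadata.

```python
import subprocess
import time
from pathlib import Path

BACKUP_ROOT = Path("/backups/git")  # placeholder storage location
REPOS = ["https://github.com/example-org/core-platform.git"]  # hypothetical repository

def incremental_mirror(url: str) -> float:
    """Create a mirror clone on the first run, then fetch only what changed.
    Returns the duration of the run in seconds."""
    target = BACKUP_ROOT / url.rstrip("/").split("/")[-1]
    start = time.monotonic()
    if not target.exists():
        subprocess.run(["git", "clone", "--mirror", url, str(target)], check=True)
    else:
        # 'remote update --prune' fetches new commits/refs and drops deleted ones.
        subprocess.run(["git", "--git-dir", str(target), "remote", "update", "--prune"],
                       check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    durations = [incremental_mirror(url) for url in REPOS]
    print(f"average backup duration: {sum(durations) / len(durations):.1f}s per repository")
```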

So once you start monitoring, for example, how long it takes to back up our Jira or our Bitbucket, you can come to the point of saying: OK, it only takes us four hours to back this up – why are we backing it up once per day? Maybe we can back it up twice per day. Maybe, if we have a team working in multiple time zones, we can back it up three times per day.

So adjust the frequency around that average, and at the end of the day you will get the benefit of keeping your RPO, your recovery point objective, as low as possible. For the best infrastructures I’ve seen, we’ve managed to keep the RPO at under one hour, meaning that in case of an incident you lose only the last hour’s worth of data.

And one of the worst I’ve seen was at a large automotive company from the States: before we started working with them, their RPO was over a week, and they actually had an incident in which they lost a week’s worth of data. Another thing: lack of automation in the backup process. I’ve already touched on this point, mentioning that lack of automation is probably why our backup frequency is not that great.

And I have one simple statistic to back this up: the average backup frequency for non-automated backups is every six months. Meaning that if you take one of your employees – probably a junior, let’s face it, you’re not going to hand this to anybody important – and tell him: this is the backup of our infrastructure, this is where you export the data from our Jira, this is where you run a git clone of all our repositories (obviously without all the important metadata, but that’s all we are capable of doing), please do it.

If you tell this guy to do it, he’ll probably remember to do it today, and then again in six months when you ask him: hey, how are those backups doing? It might sound like science fiction, but this is not a statistic I made up off the top of my head.

I recommend checking the source so you can see this is really a problem. And not having automated backups in place nowadays – with the pace of work and the multitasking that employees are facing – effectively means not having a backup at all, because a backup that is six months old is probably not usable at all.

Insufficient storage protection

Another mistake we very often see: insufficient storage protection. When we think about backing up our most critical data – financial records, personal records, our technology (for example, information about how our hardware is built), medical records if some of you work in the medical industry – we typically employ state-of-the-art storage to protect that data.

However, with DevOps this is typically quite neglected. We think: OK, the local device of one of our developers is good enough storage. Or we push it into some S3 bucket in AWS, keep the data there, and call it good enough.

And here’s an important point: one storage for DevOps data is not enough. Would you feel safe keeping the most vital financial records of your customers in just one place, in an encrypted zip? If I asked you this – and if we have any CISOs here with experience in that, feel free to share your thoughts – in most cases you would say hell no.

This data is too important and too sensitive to be treated like that. The same applies to the source code and metadata from your GitHub, GitLab, Bitbucket, Azure DevOps, et cetera. Why? Because that data is either the core of a product you are selling, the core of the software that keeps your business running, or part of your product if you are selling modern hardware.

So you can’t imagine losing your intellectual property. Think about some custom in-house application that a developer built five years ago and that makes your company run every day. Maybe it’s your timesheet app.

Maybe it’s the app that is used to pay the company’s bills, like a custom ERP. Imagine that the company loses it today. How big of a hit is that? It would probably stop the company’s operations for a long time.

So protecting the source code of this application is just as important as protecting your most sensitive data. That’s why we should apply the same practices for storing this data when we back it up. We should keep it in multiple different storages.

We should apply the 3-2-1 backup rule and have storage replication. We should encrypt this data, both at rest and in transit. We should use immutable storage, ensuring that after the data is put in a secure location, nobody can modify it or delete it without you knowing.
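As one illustration of those storage practices, here is a hedged sketch of provisioning an AWS S3 bucket as one of the copies the 3-2-1 rule asks for: Object Lock (which implies versioning) so backups cannot be silently altered or deleted during the retention window, plus default server-side encryption at rest. The bucket name, region, and retention period are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
BUCKET = "example-devops-backups"  # placeholder bucket name

# Object Lock must be enabled at creation time; it also turns on versioning.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    ObjectLockEnabledForBucket=True,
)

# Default retention: objects cannot be overwritten or deleted for 30 days (illustrative).
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Encrypt everything at rest by default; in transit, the SDK talks to S3 over HTTPS.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
```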

You should also think about how to enable ransomware protection for this data, because, as we’ve seen, 83% of breaches involve an external actor – so the probability is quite high. In such a scenario, you want your backup data protected from a ransomware attack.

The easiest way to do this is to store the data in a format that is non-readable and non-executable. First of all, even if somebody with stolen credentials gets into the storage holding your data, they still cannot make any use of it. They cannot see how your systems, your product, or the applications important to your business are built and run.

Second, ransomware attacks are based on encrypting your data and cutting you off from it, and those attacks are carried out from within the storage. If the files are kept in a non-executable format, any malicious software designed to encrypt your data and lock you out of it won’t be able to execute either. So that’s another thing to consider.
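A minimal sketch of that idea: the backup is packed and then encrypted client-side, so what lands in storage is neither readable nor executable on its own. The paths are placeholders and the key handling is deliberately simplified; in practice the key would live in a KMS or vault, never next to the data.

```python
import tarfile
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

SOURCE = Path("/backups/git")  # placeholder: the mirrored repositories
ARCHIVE = Path("/backups/out/devops-backup.tar.gz")
ENCRYPTED = ARCHIVE.with_name(ARCHIVE.name + ".enc")

# 1. Pack the backup into a single compressed archive.
ARCHIVE.parent.mkdir(parents=True, exist_ok=True)
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(str(SOURCE), arcname="git-backup")

# 2. Encrypt it client-side before it ever reaches the storage target.
#    (Whole file in memory is fine for a sketch; stream for very large archives.)
key = Fernet.generate_key()  # in reality: fetched from a KMS or vault
ENCRYPTED.write_bytes(Fernet(key).encrypt(ARCHIVE.read_bytes()))
ARCHIVE.unlink()  # keep only the encrypted, non-executable blob

print(f"wrote {ENCRYPTED}; the key stays out of band")
```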

And ask yourself: first of all, do the storages you use for what you believe is your most critical data even follow those best practices? I hope they do. And if yes, are we applying the same thing to our DevOps data, or are we just saying that all the source code is distributed across our developers’ computers and that’s good enough?

Too narrow Disaster Recovery scenarios

Another thing: too narrow disaster recovery scenarios. In many cases, when we plan how to restore data – let’s say we use GitHub on a daily basis and make some sort of snapshot of it, a backup, a git clone – we assume that in case of an incident we are going to restore that data back to GitHub and continue working there.

The question is: what happens if GitHub has an outage? How will you continue working then? That’s something people typically don’t think about. That’s why, when thinking about backup of our DevOps data, you should first of all be able to switch between platforms. You can’t be dependent on GitHub.

You can’t be dependent on GitLab. You can’t be dependent on Atlassian. You need to be able to switch between the platforms – and these are not empty threats.

In 2017 there was the biggest GitLab security incident ever. Some of our customers lost access to their GitLab infrastructure and all their data for two weeks. Ask yourself right now: if you lost access to software that runs your business, or is a critical part of your infrastructure, for two weeks, how would it impact your business? How big would the losses be? If your developers cannot work for two weeks, how much would their idleness cost you?

So you need to be able to switch between platforms. You also need to be independent from the cloud infrastructure. I know that everybody is now talking about cloud and cloud only.

Some vendors, like Atlassian, are even forcing their customers to move to the cloud. However, you should still be prepared, in case of an incident, with some sort of on-premise infrastructure that you can fall back to, even temporarily. You also need to be able, especially with DevOps data, to restore data without overwriting it.

If we are talking about the most common incidents – often related to accidental deletion of data by your staff – you probably don’t need to bring back the whole Jira workspace or the whole GitHub organization if somebody messed up one repository by accident. You want to be able to restore just that one if necessary. So, moving on: what makes a reliable disaster recovery solution for your DevOps data? First of all, you need to be prepared for all scenarios.

If a service like GitHub or GitLab has an outage, have a cross-restore capability: be able to switch from cloud to on-premise, or from GitHub to GitLab, from Bitbucket to GitLab, and the other way around. Make sure you’re able to get back to work as soon as possible.

Have a so-called granular restore capability: give yourself the possibility to restore data without overwriting anything, and to restore only the things that matter – for example, only the ongoing projects, so you can continue working on them rather than restoring the whole thing, which takes substantial time.

If you lose access to your account, be prepared to restore to another account, even a free one. I had a case like this: a customer using GitHub who, due to a ransomware attack – carried out, by the way, by one of their subcontractors – lost access to their GitHub environment. Not only did they need to restore the data, they wanted to restore it back to GitHub but to a different account, because their original one wasn’t available.

So have the possibility to do something like this; don’t be tied to one account only. Sometimes we realize too late that some data is lost.

Be able to make a so-called point-in-time restore: enable yourself to go back in time as far as possible and restore data from two weeks ago, three weeks ago, three months ago. This is also important for ransomware protection, because when your codebase is infected, the attack typically doesn’t trigger immediately.

It triggers three months later, six months later. Attackers are patient. So consider the fact that if, in six months’ time, I realize my codebase is infected – and my backups from the past six months are infected too – I should be able to restore data from seven months ago.

So have a platform that, first of all, allows you to store backups for such a prolonged time, and secondly, allows you to restore that data. I recently met a backup vendor who told me they can only keep the three latest copies, and they run a copy once per day – meaning that if today is Wednesday, I can only restore the backup from Monday. OK, but what if my code was infected two weeks ago? Too bad.
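That retention question boils down to simple arithmetic, sketched below with hypothetical numbers: does the window you can still restore from actually reach back past the suspected dormancy period?

```python
from datetime import date, timedelta

retention_days = 90  # how far back your platform lets you restore (hypothetical)
suspected_infection = date.today() - timedelta(days=6 * 30)  # noticed six months late

oldest_restorable = date.today() - timedelta(days=retention_days)
if oldest_restorable > suspected_infection:
    print("Gap: every copy you can still restore postdates the infection.")
else:
    print("OK: you can restore a copy from before the infection.")
```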

Not testing backups

Another thing: not testing backups. I was recently speaking at a conference to a full room of people, and I asked: how many of you have tested your backups?

Of any type – not necessarily DevOps, any backup ever. Only one person raised their hand. Why is testing backups so important? Only 38% of Git backups end up working as expected when an attempt to restore is made.

Meaning that even if you told your junior developer to make that git clone or run some script, and today you try to restore data from it, in two out of three cases it’s not going to work. You’re not going to get your data back. Too bad.

And why do you need this? Because security incidents at the vendors you rely on – Jira, GitLab, GitHub – happen all the time, so you never know when something will happen that impacts you and your environment. Vulnerabilities are discovered all the time.

And if you can restore your data in only one out of three cases, it’s almost like you don’t have a backup at all. That’s why it’s crucial to test your backups. Obviously not every week.

Maybe not even every month. But once per quarter, have this exercise with your team: try to restore a couple of your most important repositories and one Jira project, and see if it works.

Once per year, when it’s time for your pen tests and your annual audits, try to bring back the whole infrastructure and see if it works. Personally, I think that once per year is not enough. You should do it more often.

But do it sometimes. And especially if you make any changes to your backup strategy, to the way you run your backups, or just major changes in how your platform is configured. For example, how your GitHub is configured.

Afterwards, try to restore the data as well, because I’ve also seen scenarios where somebody changed something in their configuration – in Jira or in GitHub, for example permission settings or branch restrictions – and suddenly their backups stopped working or were missing essential data. So if you make any major change, test your backups as well.
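A restore drill does not need to be elaborate. Even a sketch like the one below, run against a handful of your most important repositories, already tells you whether the copies are usable: restore into a scratch directory, let Git verify object integrity, and compare the restored HEAD against production. The paths and URLs are placeholders.

```python
import subprocess
import tempfile

BACKUP_MIRROR = "/backups/git/core-platform.git"  # placeholder backup copy
PRODUCTION_URL = "https://github.com/example-org/core-platform.git"  # hypothetical

def head_commit(repo: str, ref: str = "HEAD") -> str:
    """Return the commit hash a ref points to, for a local path or a remote URL."""
    out = subprocess.run(["git", "ls-remote", repo, ref],
                         check=True, capture_output=True, text=True).stdout
    return out.split()[0]

with tempfile.TemporaryDirectory() as scratch:
    # 1. Restore: clone from the backup copy, never touching production.
    subprocess.run(["git", "clone", BACKUP_MIRROR, scratch], check=True)
    # 2. Integrity: git fsck walks every object and reports corruption.
    subprocess.run(["git", "-C", scratch, "fsck", "--full"], check=True)
    # 3. Freshness: how far is the backup behind production?
    restored, live = head_commit(BACKUP_MIRROR), head_commit(PRODUCTION_URL)
    print("backup is current" if restored == live
          else f"backup HEAD {restored[:7]} differs from production {live[:7]}")
```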

Lack of proper backup monitoring

Another thing: lack of proper backup monitoring. Backup monitoring is actually what lets you get away with testing your backups less often, because if you don’t have monitoring in place, you can only believe that every backup you made works and is complete.

And that’s only your belief. And you can verify it only if you test it. However, modern backup solutions give you monitoring capability.

First of all, they let you know whether the backup ran according to schedule and whether it connected properly with the source of your data: did it connect properly with your GitHub, GitLab, Bitbucket, Jira? Did it connect properly with the storage where you keep the data? Did it fetch all the data it was supposed to fetch? Was it denied access to some of that data? Were there issues with compression or encryption? Basically, you get a summary of the backup. And of course, you should still test it.

Because the successful backup is not only about fetching all the data. It’s also about having a restore strategy that works. And you need to test your restores more than testing your backups.

But at least you have some indication. I had a case where a customer had a very robust script that was supposed to fetch repositories using git clone and then fetch the metadata using the API. Due to a mistake in it, they hadn’t been capturing deployment keys and some other metadata.

Deployment keys are what sticks in my head. They hadn’t been capturing them for three months, and they only realized it when they tried to restore the data.

And luckily, they only found out during a test restore, because there was no monitoring in place. There are many modern solutions on the market with very detailed audit logs and the possibility to integrate with the tools you use every day.

They are available online – think about it. Worst-case scenario, if you’re still running a script, make sure the script reports to your Splunk or some other monitoring tool that you have in place.
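Worst case, if the backup is still a script, the last line of that script can at least report what happened. Here is a minimal sketch posting one summary event per run to Splunk’s HTTP Event Collector; the host, token, and event fields are placeholders, just an example of what is worth recording.

```python
import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                        # placeholder token

def report_backup_status(source: str, repos_ok: int, repos_failed: int, duration_s: float) -> None:
    """Push one summary event per backup run so failures become visible immediately."""
    event = {
        "sourcetype": "_json",
        "event": {
            "job": "devops-backup",
            "source_platform": source,  # e.g. "github", "jira"
            "repos_ok": repos_ok,
            "repos_failed": repos_failed,
            "duration_seconds": round(duration_s, 1),
            "status": "ok" if repos_failed == 0 else "failed",
        },
    }
    resp = requests.post(SPLUNK_HEC,
                         headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                         json=event, timeout=10)
    resp.raise_for_status()

report_backup_status("github", repos_ok=297, repos_failed=3, duration_s=842.0)
```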

No scalability of backup and DR

Another problem: lack of scalability of your backup and disaster recovery strategy. In many cases, we don’t ask ourselves whether our backups are ready to handle a larger infrastructure. Why? Because when we back up the data, or plan the backup strategy, we consider things as they are today.

But what we should consider is how it will grow. Because of how important software development has become, the growth in data size for DevOps is among the highest of any field. Every day we are adding new lines of code – a cliché, but it’s true.

We’re adding more repositories, more projects, more files, we push more data, we make more pull requests. We start new projects in Jira. We raise new tickets in Jira.

We add 78 comments in Jira as well. And we are not ready for the increasing size of those infrastructures. I have one customer who woke up one day and realized something about a single repository in their organization.

In total, they have 1,000 repositories, which is already not that small. And they had this one repository with 11,000 pull requests, and the repo itself weighed over 20 gigabytes.

They realized that although they could back it up, they were not able to restore it, because the solution they used for restoring the data could not handle it. So, to know whether your solution is scalable, you just need to ask yourself a couple of questions.

How often do you test restoring your data? Because maybe between three months ago, when you last ran a test, and today, somebody started a new project that is especially heavy or growing especially fast. Maybe we hired more developers. And now our backup is no longer sufficient.

How often do we test restoring the data, and when we do, do we test all the scenarios? Are we only running the easy GitHub-to-GitHub or Jira-to-Jira restore, or are we also testing extreme scenarios, like from cloud back to on-premise, or the other way around, and checking whether those work? Do we have auto-scalability on the platforms that consume the resources? Have we stress-tested our backup solution – have we tried to kill it with that 20-gigabyte repository and seen how it behaves? Are we monitoring our storage when it comes to available space – do we monitor the free space and forecast when we’re going to run out of it? Obviously, on some platforms like AWS and Azure we have auto-scaling storage, but it’s often tied to a budget limit, and even with auto-scaling, the budget we set up in Azure, let’s say, will limit us.
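The storage-forecast question at the end of that list is, again, simple arithmetic; a sketch with hypothetical numbers:

```python
free_gb = 750         # free space left on the backup storage today
daily_growth_gb = 12  # average daily growth measured over recent weeks
budget_cap_gb = 500   # extra space the approved budget still allows

days_until_full = free_gb / daily_growth_gb
days_with_budget = (free_gb + budget_cap_gb) / daily_growth_gb
print(f"storage full in ~{days_until_full:.0f} days, "
      f"or ~{days_with_budget:.0f} days if the budgeted expansion is used")
```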

Trusting cloud vendors/service providers too much

And the last mistake. I put it last because I believe it’s the most important one, and I want it to stick in your memory: trusting cloud vendors too much. Many companies say: my service provider – GitHub, GitLab, Atlassian, which provides me with Jira – protects my data with their own backup.

I don’t need an extra solution for it. I encourage you CISOs to ask your development team tomorrow whether they are backing up their source code and their Jira – this is probably the answer you’re going to get. But is it true? Can you really rely on it? Let me show you some numbers.

Normally I ask people to guess what those numbers mean, but I’m not going to keep you in suspense, because we’ve already been sitting here for 45 minutes – I’ll just tell you.

They are the numbers of security incidents that happened at those most popular DevOps solution vendors last year. GitHub had 165 security incidents. GitLab had 76 security incidents.

Atlassian, at the forefront, had over 200. And what’s more important: not only do they have this number of incidents, they also state in their terms of service that they do not take responsibility for your data loss or for damage to your data. And those terms, which are very brutal for many businesses, are nicely called the shared responsibility model.

You can see the same for GitHub, and the same for Atlassian, which tells you that you cannot use their backups to roll back changes or retrieve your data – for example, if something was deleted. They are saying: we only take care of our platform, its hosting, and our system, but everything you do in the platform – your users, their actions, and the data you put inside – is your own responsibility.

Meaning that you can’t really rely on your vendors. Their message is: we are giving you a great tool, and we are supporting this tool.

But you shouldn’t trust them completely. You should have a third-party solution, or at least a strategy, in place for backing up this data. And those are the 10 lessons I had for you today.

Obviously, this is just the tip of the iceberg, and I’m sure I wasn’t able to cover everything, although I did my best. So if you have any questions about our platform, about how to protect your DevOps data, or how to implement backup of your source code, your intellectual property, and your data from GitHub or Jira, feel free to reach out to me.

Feel free to reach out to the GitProtect team – I encourage you to visit GitProtect.io – and we’ll be more than happy to assist you. Thank you very much for your time today.

Thank you for attending this LinkedIn Live. And hopefully we’ll see each other soon during our next LinkedIn Live. Take care.

Cheers. Bye-bye.

